
I & J


MarkWills

Jun 2, 2010, 5:06:37 PM
Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?

Thanks

Mark

Elizabeth D Rather

Jun 2, 2010, 5:12:52 PM
On 6/2/10 11:06 AM, MarkWills wrote:
> Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?
>
> Thanks
>
> Mark

No. The "indefinite loops" don't have any kind of loop index. Only the
DO/?DO "finite" loops have indices.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

MarkWills

Jun 2, 2010, 5:32:02 PM
On 2 June, 22:12, Elizabeth D Rather <erat...@forth.com> wrote:
> On 6/2/10 11:06 AM, MarkWills wrote:
>
> > Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?
>
> > Thanks
>
> > Mark
>
> No.  The "indefinite loops" don't have any kind of loop index.  Only the
> DO/?DO "finite" loops have indices.
>
> Cheers,
> Elizabeth

Ah! (phew!).

Thanks Elizabeth,

Mark.

Hugh Aguilar

Jun 2, 2010, 5:48:14 PM

I think what you are asking is: if you have a BEGIN..WHILE..REPEAT
loop nested inside DO..LOOPs, are the I and J of the outer DO loops
valid inside the BEGIN..WHILE..REPEAT loop? The answer is *yes*.
The BEGIN..WHILE..REPEAT loop doesn't have any effect on the return
stack, which is where I and J are stored, so you still have access to
I and J.

I think E.R. thought that you were asking if the BEGIN..WHILE..REPEAT
loop creates its own I value. The answer to this, as she said, is No.
BEGIN..WHILE..REPEAT loops don't do anything under the hood for you.
You have a lot more latitude about what you are doing. You may not
even have an index value (for example, while reading lines from a
sequential file). Generally speaking, whatever data your WHILE is
testing will be held on the parameter stack, so none of this has any
effect on an outer DO loop. Your I and J are fully accessible inside
of the BEGIN..WHILE..REPEAT loop.
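
For example (a quick sketch -- DEMO is a made-up name, and the loop
bodies exist only to have something to print):

: DEMO ( -- )
  2 0 DO                    \ outer DO loop: its index is J inside the inner one
    2 0 DO                  \ inner DO loop: its index is I
      0                     \ a counter, kept on the data stack
      BEGIN DUP 3 < WHILE   \ indefinite loop: no index of its own
        1+
      REPEAT
      CR J . I . .          \ J and I still refer to the DO loop indices
    LOOP
  LOOP ;

Each printed line shows J, I and the counter (always 3): the
BEGIN..WHILE..REPEAT loop leaves the DO indices untouched.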

The Beez'

Jun 2, 2010, 6:53:30 PM
On Jun 2, 11:48 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
I wouldn't wrap a begin/while inside a do/loop unless it could be
condensed to a single line. Then, maybe. Normally you would make it a
separate word and pass i/j as parameters. I really hate these Forth
"words" that surpass a legal page. That isn't Forth, that's C.
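
Something like this (just a sketch -- PROCESS and SCAN are made-up
names, and the body of PROCESS is elided):

: PROCESS ( i j -- ) ... ;   \ the begin/while lives in here
: SCAN ( -- )
  10 0 DO
    10 0 DO
      I J PROCESS            \ pass the indices as ordinary stack parameters
    LOOP
  LOOP ;

Now PROCESS can be tried from the keyboard with any i/j you like,
without ever running the outer loops.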

Hans Bezemer

Hugh Aguilar

Jun 2, 2010, 7:36:03 PM

I don't like lengthy functions either --- no real Forth programmer
does.

Mark was asking about what is legal or illegal, and I was answering
him. E.R.'s answer most likely confused him; I don't think she
understood what his question was. She thought he was asking if
BEGIN..WHILE..REPEAT loops have an I index variable, which they don't,
but it is unlikely that he thought they did --- the I and J he was
referring to were of outer DO loops afaik.

The Beez'

Jun 3, 2010, 2:21:44 AM
On Jun 3, 1:36 am, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> Mark was asking about what is legal or illegal, and I was answering
> him. E.R.'s answer most likely confused him; I don't think she
> understood what his question was. She thought he was asking if
> BEGIN..WHILE..REPEAT loops have an I index variable, which they don't,
> but it is unlikely that he thought they did --- the I and J he was
> referring to were of outer DO loops afaik.
I know. And you answered it completely correctly. I just wanted to add
a note on good style. IMHO, if you don't pick it up WHILE learning the
language, it is harder to correct later. Or worse, you start to write
Forth spaghetti, hard to debug, and throw away the language in
disgust. E.g. in this case it is much easier to monitor the passing of
the parameters to the BEGIN..WHILE..REPEAT loop than to figure out
what the stack diagram is INSIDE the loop.

Hans

Albert van der Horst

Jun 3, 2010, 5:06:45 AM
In article <04d12354-52dc-49bb...@e28g2000vbd.googlegroups.com>,

MarkWills <markrob...@yahoo.co.uk> wrote:
>Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?

Yes, of course. A BEGIN .. WHILE .. REPEAT construct has no
loop index of its own. The indices of outer DO loops are available
just fine, exactly as they would be in an IF .. ELSE .. THEN construct:

: test 10 0 DO 0 BEGIN DUP I + WHILE 1- REPEAT . LOOP ;
test

0 -1 -2 -3 -4 -5 -6 -7 -8 -9

>
>Thanks
>
>Mark

Greetings, Albert


--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Elizabeth D Rather

Jun 3, 2010, 3:14:18 PM
On 6/2/10 11:06 PM, Albert van der Horst wrote:
> In article<04d12354-52dc-49bb...@e28g2000vbd.googlegroups.com>,
> MarkWills<markrob...@yahoo.co.uk> wrote:
>> Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?

>
> Yes, of course. A BEGIN .. WHILE .. REPEAT construct has no
> loop index of its own. The indices of outer DO loops are available
> just fine, like it would in an IF .. ELSE .. THEN construct
>
> : test 10 0 DO 0 BEGIN DUP I + WHILE 1- REPEAT . LOOP ;
> test
>
> 0 -1 -2 -3 -4 -5 -6 -7 -8 -9

Absolutely correct, and a clear example. However, I want to echo Hans'
caution about using nested loops that are too complex. As soon as you
get logic in your definition that's only somewhat more complex than this
sample definition, it becomes very difficult to test. A good rule of
thumb is that it's ok to nest loops (of any kind) one layer in a
definition if at least one of them is very simple, but as soon as there
are more layers or more complexity in each loop it's advisable to factor
out the inner part so you can test it thoroughly before running it in
the outer layers.

Hugh Aguilar

Jun 3, 2010, 4:04:37 PM
On Jun 3, 1:14 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> Absolutely correct, and a clear example.  However, I want to echo Hans'
> caution about using nested loops that are too complex.  As soon as you
> get logic in your definition that's only somewhat more complex than this
> sample definition, it becomes very difficult to test.  A good rule of
> thumb is that it's ok to nest loops (of any kind) one layer in a
> definition if at least one of them is very simple, but as soon as there
> are more layers or more complexity in each loop it's advisable to factor
> out the inner part so you can test it thoroughly before running it in
> the outer layers.

Well, we all seem to agree completely on the importance of factoring.

C programmers often give a lot of lip-service to the *idea* of
factoring, but they don't do it in practice. The reason is that they
debug everything by single-stepping through the entire program. If
they do factor out the inner loop into a separate function, they have
no way of testing that function in isolation. Whether the inner loop
is factored out or not, their plan is to single step through the
entire program, most likely bracketing the inner loop in breakpoints
--- it doesn't really matter to them if this inner loop is in a
function by itself, or is buried inside a big multi-page function. The
typical C programmer hasn't the slightest idea what the purpose of
factoring is. If asked, he will say: "Well, factoring can make the
code more readable, but it is just a matter of aesthetics with no
practical value." They are pretty much oblivious to the concept of
testing functions in isolation. This is a really horrible way to
develop software though. In Forth, we do a lot of factoring --- and
there is actually a purpose to this, as Forth allows functions to be
debugged in isolation. This, more than anything else, is why I prefer
Forth to C.

MarkWills

Jun 3, 2010, 4:35:01 PM
On 3 June, 20:14, Elizabeth D Rather <erat...@forth.com> wrote:
> On 6/2/10 11:06 PM, Albert van der Horst wrote:
>
> > In article<04d12354-52dc-49bb-98b3-efd5feec5...@e28g2000vbd.googlegroups.com>,
> > MarkWills<markrobertwi...@yahoo.co.uk>  wrote:

> >> Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?
>
> > Yes, of course. A BEGIN .. WHILE .. REPEAT construct has no
> > loop index of its own. The indices of outer DO loops are available
> > just fine, like it would in an IF .. ELSE .. THEN construct
>
> > : test 10 0 DO  0 BEGIN DUP I + WHILE 1- REPEAT . LOOP ;
> > test
>
> > 0 -1 -2 -3 -4 -5 -6 -7 -8 -9
>
> Absolutely correct, and a clear example.  However, I want to echo Hans'
> caution about using nested loops that are too complex.  As soon as you
> get logic in your definition that's only somewhat more complex than this
> sample definition, it becomes very difficult to test.  A good rule of
> thumb is that it's ok to nest loops (of any kind) one layer in a
> definition if at least one of them is very simple, but as soon as there
> are more layers or more complexity in each loop it's advisable to factor
> out the inner part so you can test it thoroughly before running it in
> the outer layers.
>
> Cheers,
> Elizabeth

Good advice from all. Thank you very much. Regarding the complexity of
nested code, I have found that factoring at the 'point of complexity'
into a new word helps. I don't know why; the underlying complexity has
just moved. However, my brain finds each individual word/factor easier
to deal with on an individual level, I think.

Thanks everyone.

Mark

MarkWills

Jun 3, 2010, 4:35:55 PM
>
> Well, we all seem to agree completely on the importance of factoring.
>
Indeed, Hugh. Wonders will never cease!

Seriously though, thanks for the advice and info.

Mark

MarkWills

Jun 3, 2010, 4:38:30 PM

I've actually met people who pride themselves on being able to write
really obfuscated, hard to read C code. It's a badge of honour. A
shame.

Mark

Uwe Kloß

Jun 3, 2010, 5:11:02 PM
Elizabeth D Rather schrieb:

> On 6/2/10 11:06 PM, Albert van der Horst wrote:
>> In
>> article<04d12354-52dc-49bb...@e28g2000vbd.googlegroups.com>,
>>
>> MarkWills<markrob...@yahoo.co.uk> wrote:
>>> Are I & J valid inside a BEGIN...WHILE...REPEAT loop construct?
>>
>> Yes, of course. A BEGIN .. WHILE .. REPEAT construct has no
>> loop index of its own. The indices of outer DO loops are available
>> just fine, like it would in an IF .. ELSE .. THEN construct
>>
>> : test 10 0 DO 0 BEGIN DUP I + WHILE 1- REPEAT . LOOP ;
>> test
>>
>> 0 -1 -2 -3 -4 -5 -6 -7 -8 -9
>
> Absolutely correct, and a clear example. However, I want to echo Hans'
> caution about using nested loops that are too complex. As soon as you
> get logic in your definition that's only somewhat more complex than this
> sample definition, it becomes very difficult to test.

And even this _simple_ example gives a strong argument for factoring
out and testing the inner loop separately:

Assuming the inner loop is supposed to implement NEGATE, a factored-out
and seriously tested version will show its interesting properties when
applied to negative numbers, especially on 32-bit or even 64-bit machines.

The embedded loop might not necessarily run into that snag during
testing because it might be more difficult to create stimuli to the
outer definition that generate the necessary input to the inner loop.
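
To spell that out (a sketch, lifting the inner loop out of the example
word as a stand-alone definition):

: SLOW-NEGATE ( n -- -n )
  0 BEGIN 2DUP + WHILE 1- REPEAT NIP ;

3 SLOW-NEGATE . prints -3 as expected, but -1 SLOW-NEGATE counts
*down* from 0 and can never reach the +1 it needs, so it spins until
the counter wraps around the whole cell range. Tested in isolation,
the snag shows up at once; buried inside a DO loop that only ever
feeds it non-negative indices, it might never be noticed.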

Regards,
Uwe

Sp...@controlq.com

Jun 3, 2010, 5:34:59 PM
On Thu, 3 Jun 2010, Hugh Aguilar wrote:

> Date: Thu, 3 Jun 2010 13:04:37 -0700 (PDT)
> From: Hugh Aguilar <hughag...@yahoo.com>
> Newsgroups: comp.lang.forth
> Subject: Re: I & J


>
> On Jun 3, 1:14 pm, Elizabeth D Rather <erat...@forth.com> wrote:
>> Absolutely correct, and a clear example.  However, I want to echo Hans'
>> caution about using nested loops that are too complex.  As soon as you
>> get logic in your definition that's only somewhat more complex than this
>> sample definition, it becomes very difficult to test.  A good rule of
>> thumb is that it's ok to nest loops (of any kind) one layer in a
>> definition if at least one of them is very simple, but as soon as there
>> are more layers or more complexity in each loop it's advisable to factor
>> out the inner part so you can test it thoroughly before running it in
>> the outer layers.
>
> Well, we all seem to agree completely on the importance of factoring.
>
> C programmers often give a lot of lip-service to the *idea* of
> factoring, but they don't do it in practice. The reason is that they

This is a rash generalization.

> debug everything by single-stepping through the entire program. If
> they do factor out the inner loop into a separate function, they have

This is false.

> no way of testing that function in isolation. Whether the inner loop

This is also false.

> is factored out or not, their plan is to single step through the
> entire program, most likely bracketing the inner loop in breakpoints
> --- it doesn't really matter to them if this inner loop is in a
> function by itself, or is buried inside a big multi-page function. The
> typical C programmer hasn't the slightest idea what the purpose of
> factoring is. If asked, he will say: "Well, factoring can make the

More stuff of a lazy imagination.

> code more readable, but it is just a matter of aesthetics with no
> practical value." They are pretty much oblivious to the concept of
> testing functions in isolation. This is a really horrible way to

Again, false.

> develop software though. In Forth, we do a lot of factoring --- and
> there is actually a purpose to this, as Forth allows functions to be
> debugged in isolation. This, more than anything else, is why I prefer
> Forth to C.

Hugh, I apply the same factoring approach to C as I do to Forth.
Frequently I've re-written code to make it more generally applicable, or
to identify a general case and re-use procedures in an intuitive way.

I see no need to denigrate either the C language, or C programmers to make
the case for Forth. It is quite possible that one person can program in
*BOTH* languages elegantly, professionally and in an aesthetically
pleasing manner. Oh, and your assumptions of how a debugger works in C
are largely without merit, and many C programmers don't write "big
multi-page functions". I understand that you have opinions, but I'm not
entirely clear on how you base them.

Rob Sciuk

The Beez'

Jun 3, 2010, 6:15:57 PM
On Jun 3, 11:34 pm, S...@ControlQ.com wrote:
> I see no need to denigrate either the C language, or C programmers to make
> the case for Forth.  It is quite possible that one person can program in
> *BOTH* languages elegantly, professionally and in an aesthetically
> pleasing manner.  Oh, and your assumptions of how a debugger works in C
> are largely without merit, and many C programmers don't write "big
> multi-page functions".  I understand that you have opinions, but I'm not
> entirely clear on how you base them.
Well, I agree with several points here. First, C is a great language
for writing C programs. That may seem like stating the obvious, but C
was created as a vehicle to write operating systems, complex utilities
and languages. And there it does a great job - if used carefully. I've
seen a LOT of C programs, though, that use multi-page functions, that
pass addresses of automatic variables where values would have done a
much better job, and that have grown beyond being comprehensible. I've
seen a lot of Forth programs that work that way too. Yes, and there
are always excuses for that. There is no lack of excuses in this
world.

You can write very clever programs in C. You can write very fast
programs in C. You can write very elegant programs in C. You can write
very understandable programs in C. Rarely can you combine all these
qualities in a single program. C has its weaknesses, and you have to
know very well how C works (some say up to assembly level - after
optimizations) in order to get C to do what you want it to do.

My C style has significantly changed after I got to know Forth
properly and I'm happier with those programs than I was before. They
are MUCH easier to debug. I haven't used a debugger since (I did
before). But you can't turn C into Forth. There are variables, you
can't kill 'em. And calling a function within a function within a
function will never be as elegant and easy to read as Forth (you have
to read from right to left to get the order of execution). Parsing and
writing interpreters will never be as easy in C as it is in Forth. And
contrary to popular belief: I do maintain Forth programs and don't
find it very difficult. Just be sure to add stack diagrams to
important words.

But Forth has its weaknesses too. Unix-like I/O is horribly
implemented. There is no standard for getting command line parameters.
The multitude of single, double, mixed and floating point words is
confusing, ugly and not transparent to the programmer. Too many people
rip open the magic toolbox and use horrible constructs like: POSTPONE
A POSTPONE B S" A B" EVALUATE. Chuck wanted a separate wordset for
these magic tools, and I agree with him. Applications programming <>
systems programming.

Forth cannot do C programs and C cannot do Forth programs. Tool and
task. That's the bottom line. I have both in my toolbelt and pull 'em
out when I need them. You can't blame a hammer for not being a screwdriver.

In short, I see no reason to denigrate any of these wonderful tools
either. Compilers don't write horrible programs. People write horrible
programs.

For your entertainment, I published the following tongue-in-cheek
story at LXer when a language-related subject popped up:

"I like Forth. Forth is the most consistent programming language there
is. You read it from left to right. Every whitespace terminates a
token. A name can be composed of all the rest.

People love languages, because "they do so many things for them".
Forth is tiny and simple. Forth does nothing. You maintain your symbol
table. You manage your stacks. Those who are unable to balance their
occasional malloc() and free() calls can't cope without garbage
collection, because they have a habit of treating their storage
resources like a student tends to do his dorm - the electronic
equivalent of Mum and Dad, so to say. These people will love
Forth, because they have to balance both the return and the data
stack - not only between function calls, but also at each and every
branch and loop instruction. The slightest imbalance will cause a
devastating crash and tear down your entire system.

Still, Forth programs are extremely small. Most subroutines can be
written in the space of a C prototype and are consequently one line
long. Forth doesn't dictate any format. You can indent Forth code any
way you like (there are so-called "long" forms and "short" forms, but
these are just uninteresting conventions nobody actually follows).
Most Forth compilers don't even care whether you feed them Mac, DOS,
Unix ASCII or even blockfiles (chunks of diskspace, 1K long,
unformatted).

Forth has no type checking, since there are no types to check. Forth
has one single datatype: the WORD. A WORD can be (depending on the
processor) 16, 32 or 64 bits. Forth has no parameters or function
prototypes as well, which really cuts down the code size. There are
complete line-editors, OO-extensions and full blown floating point add-
ons in 1K of source. But there is a dark side. C allows you to shoot
yourself in the foot, Forth will blow your head clean off.

Consequently, Forth is the language of choice of elite programmers
(e.g. NASA). Most Forth programmers write their own compiler,
preferably a compiler that can compile the compiler and little else.
There is an ANSI standard for Forth, but most programmers don't follow
it; instead they provide a convenient interface to a non-existent ANS
Forth compiler with their programs, so you can't run a program anywhere but
on the compiler it was originally written for. Or you can adapt the
program so it runs on yours. Since Forth is so terse and comments
(although there are many ways to define them) so rare, it usually
takes less effort to rewrite the entire thing. That's why Forth is
such a rich language. There are so many incarnations of "Hello world"
and "99 bottles of beer" to choose from!

The rest of the world may have lengthy discussions about which
language is best: Python or Perl, both multi megabyte monsters that
will take the breath out of the best processors available to mankind
and turn them to the equivalent of a 50-year long smoking asthmatic
obese old man.

Yes, my dear children, garbage collection, heap bashing, type- and
range checking do not come for free."

Hans Bezemer

MarkWills

Jun 3, 2010, 6:46:04 PM
>C allows you to shoot yourself in the foot, Forth will blow your head clean off.

Ha ha! Love it!

Albert van der Horst

Jun 3, 2010, 9:22:09 PM
In article <046040e0-f138-4592...@a30g2000yqn.googlegroups.com>,

MarkWills <markrob...@yahoo.co.uk> wrote:
>
>I've actually met people who pride themselves on being able to write
>really obfuscated, hard to read C code. It's a badge of honour. A
>shame.

I'm a winner of the Obfuscated C Contest, 1992. Still proud of it.
(But that is not what you meant ;-) )

>
>Mark

John Passaniti

Jun 3, 2010, 11:01:52 PM
On Jun 3, 4:38 pm, MarkWills <markrobertwi...@yahoo.co.uk> wrote:
> I've actually met people who pride themselves on being able to write
> really obfuscated, hard to read C code. It's a badge of honour. A
> shame.

Be careful, because things aren't always what they seem.

As a case in point... me. I took over the code base of a product that
was implemented in C. The programmer who had the code previously
wasn't very sophisticated in his understanding of C. It was clear he
didn't fully understand the duality of pointers and arrays, he created
bizarre and complicated casts, and most alarming, he didn't seem to
know what basic functions were in the standard C library. Take for
example tedious sequences like this:

if (cmd[0] == 'f' &&
    cmd[1] == 'o' &&
    cmd[2] == 'o' &&
    cmd[3] == 0) do_something();
else if (cmd[0] == 'b' &&
         cmd[1] == 'a' &&
         cmd[2] == 'r' &&
         cmd[3] == 0) do_another_thing();
else ...

That kind of nonsense went on for *pages*. And worse, there were
multiple groups of these kinds of command dispatchers. I could have
taken the path of least resistance and did this:

if (strcmp("foo", cmd) == 0) do_something();
else if (strcmp("bar", cmd) == 0) do_another_thing();
else ...

But I saw a larger opportunity and cleaned up the mess with code like
this (forgive any typos):

typedef struct {
    char* name;
    void (*function)(void);
} CommandTable;

CommandTable top[] = {
    { "foo", do_something },
    { "bar", do_another_thing },
    ...
    { 0, 0 }
};

int execute(char* cmd, CommandTable* command) {
    while (command->name) {
        if (strcmp(cmd, command->name) == 0) {
            command->function();
            return 1;
        }
        command++;
    }
    return 0;
}

The source code size went down dramatically. The object code size was
reduced significantly. And I have factored all of the command
dispatchers into "execute". Sounds good, right? Well, no. I was
told I was "too clever" and that this code was harder to understand.
And I guess, in a sense, they were right. If you are a terrible C
programmer who doesn't understand the basics, yes, this is more
complicated.

The point here is that one man's obfuscation is another man's
competency. I'm not saying your anecdotal reference is an example of
this, but do keep in mind that some people don't know what they don't
know. And they'll look at casual competency as being exotic.

John Passaniti

Jun 4, 2010, 12:28:25 AM
On Jun 3, 5:34 pm, S...@ControlQ.com wrote:
> Hugh, I apply the same factoring approach to C as I do to Forth.
> Frequently I've re-written code to make it more generally applicable, or
> to identify a general case and re-use procedures in an intuitive way.

The thing about discussions like this is that they don't tend to go
anywhere. Hugh and others will point out obvious facts (like C not
having an interactive shell for testing) and will offer anecdotes--
which are probably true-- about some awful C code they've seen. And
there is no reason to believe it isn't true. But it's *also* true
that there are C programmers who have no problem writing beautifully
factored code and who *do* interactively test their functions. I mean
get serious-- the amount of code needed to drive most basic functions
is *trivial*. For my testing, I use a small library I wrote called
Tester that, not surprisingly, has a Forth flavor. I register the
function in a dictionary, telling the dictionary how many arguments it
accepts and how many values it returns. I can then call it in exactly the same way as
you can in Forth. And I've done the same thing under other shells--
years ago when I used vxWorks, it came with a shell that you could set
up to automatically read the symbol table and expose all your
functions to the shell. Zero work involved. That was even more
sophisticated, keeping track of type and automagically converting
datatypes when appropriate. I never bothered with that level of
sophistication in my own Tester, but if you have that information from
the compiler, it's not hard to pull off.

And there are other options too. There are C interpreters that let
you interactively type in code, and execute it in exactly the same way
as Forth. Again, I don't use such tools, but they're available. You
can even roll your own using a compiler like tcc, which can be
embedded inside an application.

Neither my Tester nor things like the vxWorks shell or C interpreters
are as elegant as Forth. But it certainly isn't a Herculean effort to
get some of the benefits of Forth's interactivity for testing C
functions. Hell, even without Tester, exotic shells, or interpreters,
you can do simple ad-hoc kinds of things like this:

char* someComplexFunc(char* whatever, int something) { ... }

int main(int argc, char** argv) {
    puts(someComplexFunc(argv[1], atoi(argv[2])));
}

Now I can run someComplexFunc from the command line. Clearly not as
elegant as Forth, but the only things stopping a C programmer from
testing their functions are a lack of professionalism, maturity, and a
sense of responsibility.

> I see no need to denigrate either the C language, or C programmers to make
> the case for Forth.  It is quite possible that one person can program in
> *BOTH* languages elegantly, professionally and in an aesthetically
> pleasing manner.  

It's absolutely true there are terrible C programmers. I know, as some
of my career has been spent cleaning up the messes they leave behind.
And yes, for some, the fact that C doesn't offer an interactive shell
combined with their own lack of imagination and initiative means that
they write horrific code without serious testing. But ultimately, the
problem isn't the language. It's the programmer. Quality code-- code
that is tested, correct, factored, elegant, and efficient-- ultimately
comes from the skill, experience, and insight of the programmer. Or
put another way, if you find a C programmer who writes crappy code,
it's a safe bet that regardless of the language they are programming
in, they'll write crappy code. If they lack the discipline to test
and the understanding of why factoring is so important, then putting
them in front of an "ok" prompt isn't magically going to change that.

And it works the other way too. The first language (aside from
assembler) that I learned was Forth. And so by the time I started to
study other languages, I had already developed a need to test what I
wrote and to seek out every opportunity to factor. Those fundamental
lessons didn't magically disappear when I had to use C. The fact that
it was harder in C didn't make me abandon those lessons.

It should be also stated that the Forth community loves to talk about
testing, but far more often than not, such testing is just interactive
poking at definitions from the keyboard. And that's great and useful,
but it's also lost afterwards. When a Forth programmer hands you some
code they wrote and proudly says they tested it, it's a matter of
faith that the tests were complete. A test suite preserves the tests
and documents what was tested. I don't have to take it on faith; I can
see what was and wasn't tested. Test suites also allow you to run
those same tests at any time to see if a change to code broke
anything.

And beyond even that, modern notions like Test Driven Development
start with the idea that you don't write any code until you have
designed a test for it. That is perhaps the strongest possible
statement about testing. It's growing in popularity as a
disciplined approach to writing code, but I haven't seen any (public)
Forth code written like that.

Finally, it's a programming aphorism, but usually far more true than
not: Who is the worst possible person to test code? The programmer.
Every programmer brings a set of assumptions and expectations to the
code they write and they're only going to test within the limits of
those assumptions and expectations. That should be obvious-- defects
in tested code occur because the tests didn't provide enough
coverage. Maybe it was a failure to keep a value within a domain.
Maybe it was a failure to exhaustively test a set of conditionals.
The defect exists because the tests were inadequate. So testing will
only take you so far.

Rod Pemberton

Jun 4, 2010, 10:49:24 AM
"Hugh Aguilar" <hughag...@yahoo.com> wrote in message
news:7f15bea7-ee3d-45b4...@w12g2000yqj.googlegroups.com...

> C programmers often give a lot of lip-service to the *idea* of
> factoring, but they don't do it in practice.
...

> The reason is that they
> debug everything by single-stepping through the entire program.

What? Ludicrous flame baiting...

All you need to "debug" C is a printf() statement. That's it. If you need
anything else, you don't know what you're doing.

I've only used a debugger to find a bug once in my life and it was for a
very large (5MLoc), multiple process, PL/1 program.

> The
> typical C programmer hasn't the slightest idea what the purpose of
> factoring is.

That's true. And, it usually makes no sense, or very little, to use
factoring for C. "Factoring" as applied to C will result in large numbers
of procedures. Every time you create a procedure, you have to store, pass,
and read in parameters, allocate new space on the stack for local variables,
etc. I.e., overhead. For C, the goal is to minimize the data copying and
allocation/deallocation necessary to maintain variables. C compilers don't
automatically inline the procedures during their optimization, which would
eliminate this overhead. To do so, the programmer has to go through and
add compiler-specific directives to inline the procedures...

> In Forth, we do a lot of factoring --- and
> there is actually a purpose to this, as Forth allows functions to be
> debugged in isolation.

That's true. But, it also leads to overly factored code. I.e., the code is
reduced to the point where it runs slowly. A less-factored version, many
times, will run faster.


Rod Pemberton


Doug Hoffman

Jun 4, 2010, 11:01:45 AM
On Jun 4, 10:49 am, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:
> "Hugh Aguilar" <hughaguila...@yahoo.com> wrote in message

> news:7f15bea7-ee3d-45b4...@w12g2000yqj.googlegroups.com...
> > C programmers often give a lot of lip-service to the *idea* of
> > factoring, but they don't do it in practice.
>
> ...

> > In Forth, we do a lot of factoring --- and


> > there is actually a purpose to this, as Forth allows functions to be
> > debugged in isolation.
>
> That's true.  But, it also leads to overly factored code.  I.e., the code is
> reduced to the point where it runs slowly.  A less-factored version, many
> times, will run faster.

Thank you for pointing out what I also believe to be true. That is,
it is possible to over-factor. Further, I find overly factored code
can be less readable/maintainable. It also can lead to naming issues
(too many names to create and remember). Note that this is *not* to
say that I don't believe in factoring. I believe in appropriate
factoring, which admittedly is a bit hard to define and I'll not
attempt it.

-Doug

Rod Pemberton

Jun 4, 2010, 11:40:51 AM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:17b8aa96-67a2-4070...@r27g2000yqb.googlegroups.com...

> As case in point... me. I took over the code base of a product that
> was implemented in C. The programmer who had the code previously
> wasn't very sophisticated in his understanding of C. It was clear he
> didn't fully understand the duality of pointers and arrays, he created
> bizarre and complicated casts, and most alarming, he didn't seem to
> know what basic functions were in the standard C library. Take for
> example tedious sequences like this:
>
> if (cmd[0] == 'f' &&
> cmd[1] == 'o' &&
> cmd[2] == 'o' &&
> cmd[3] == 0) do_something();
> else if (cmd[0] == 'b' &&
> cmd[1] == 'a' &&
> cmd[2] == 'r' &&
> cmd[3] == 0) do_another_thing();
> else ...
>

Obviously, this was for a time critical piece of code, yes?

> It was clear he
> didn't fully understand the duality of pointers and arrays,

None of the self-appointed C experts on comp.lang.c do either... If the few
top guys there don't, why would you expect that some average "Joe" would?
All he had to do was study answers from the test archive of his Fraternity
to get his CS degree. IMO, the only true C expert _still_ posting to NG's
that does is Douglas Gwyn.

> he created
> bizarre and complicated casts,

Painful for you to read, easy for him, but optimized away by the compiler
nonetheless...

> The source code size went down dramatically. The object code size was
> reduced significantly. And I have factored all of the command
> dispatchers into "execute". Sounds good, right? Well, no. I was
> told I was "too clever" and that this code was harder to understand.

ROFL... I suspect you're fibbing, but it was funny anyway. Yes, some
places do prefer to hire the lowest-cost employees they can get.  So, it has a
very small hint of truth to it.

> And I guess, in a sense, they were right. If you are a terrible C
> programmer who doesn't understand the basics, yes, this is more
> complicated.
>

Curious, did you ever time the old and new?

While I haven't seen such "nonsense" in C code, I did see it in PL/1 code.
It was used all the time. The application was a 5MLoc realtime OLTP program
in a PL/1 variant. I.e., a very, very time critical application. It ran
faster.  Why?  Well, it worked the same way for PL/1 as it should work for
the "nonsense" C code above:

1) there is overhead for variables to index the array
2) there is no parameter passing overhead
3) there is no function call overhead
4) the logical operators only compute necessary terms to determine the state
for the conditional

So, tell us John, does the "nonsense" version actually run faster? You
seemed to have intentionally left that part out of your post and criticized
other insignificant aspects... e.g., source code size, object code size,
clarity, cleverness, etc.


Rod Pemberton

John Passaniti

Jun 4, 2010, 12:21:45 PM
On Jun 4, 10:49 am, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:
> All you need to "debug" C is a printf() statement.  That's it.  If you need
> anything else, you don't know what you're doing.

Wow, you and Hugh deserve each other.

A "printf" isn't going to help you when you're debugging real-time,
high-speed events. A "printf" isn't going to help you when the target
is a DSP and the only inputs and outputs to the processor are audio or
video data streams. A "printf" isn't going to help you when you're
debugging a communications protocol, and the only place you have to
print is the communications channel. To get all Shakespeare on you,
there are more things in heaven and earth, Rod, than are dreamt of in
your philosophy.

One of my roles has been to conduct technical interviews for job
candidates. And one of the questions I would often ask candidates is
how they would debug a system where they couldn't use something like
printf. If they couldn't answer the question or if they (like you)
confidently proclaimed that printf was the only tool anyone needed,
they didn't last long in the interview. There is a wide range of
both standard and creative answers, from logic probes and
oscilloscopes on port pins, to taking over LEDs on the system, to
keeping a circular buffer with diagnostic messages in memory for post-
mortem review. And of course there are dedicated debuggers-- in-
circuit emulators, JTAG/BDM tools, logic analyzers, and so on.

> I've only used a debugger to find a bug once in my life and it was for a
> very large (5MLoc), multiple process, PL/1 program.

And I use the tools I have available to me, not because I don't know
what I'm doing, but because they make it much more efficient.

> > The
> > typical C programmer hasn't the slightest idea what the purpose of
> > factoring is.
>
> That's true.  And, it usually makes no sense, or very little, to use
> factoring for C.  "Factoring" as applied to C will result in large numbers
> of procedures.  Every time you create a procedure, you have to store, pass,
> read-in parameters, allocate new space on a the stack for local variables,
> etc.  I.e., overhead.  For C, the goal is to minimize the data copying and
> allocation/deallocation necessary to maintain variables.  C compilers don't
> automatically inline the procedures during their optimization, which would
> eliminate this overhead.  To do so, the programmer has to go through and
> code compiler specific directives to inline the procedures...

Oh. My. God.

I knew people like you existed. After all, part of my career has been
cleaning up the total shit that people with your attitude generate.
But it's rare to find someone who so proudly displays his lack of
sophistication and experience as if it was a badge of honor.

There are many reasons to factor in *every* language. Reduction of
redundancy. Reduction in code size. Enhanced maintainability.
Clearer code. Easier extensibility of code. Every last reason why
Forth programmers advocate factoring applies to *every* language.
Your unfounded (and I'll bet unmeasured) fears about the overhead of a
function call in C are not an excuse for you to be a terrible
programmer.

John Passaniti

Jun 4, 2010, 12:47:42 PM
On Jun 4, 11:01 am, Doug Hoffman <glide...@gmail.com> wrote:
> Thank you for pointing out what I also believe to be true.  That is,
> it is possible to over-factor.  Further, I find overly factored code
> can be less readable/maintainable.  It also can lead to naming issues
> (too many names to create and remember).  Note that this is *not* to
> say that I don't believe in factoring.  I believe in appropriate
> factoring, which admittedly is a bit hard to define and I'll not
> attempt it.

The goal with factoring should be to find repeated sequences of code
and give them meaningful names. An explosion of names can be hard to
manage if those names aren't meaningful or are all dumped into a single
huge namespace.
is how it manages definitions by effectively making the object a
namespace. Having a definition called "bounce" in a sea of a thousand
other definitions is indeed difficult because it could mean anything.
Having that same definition bound to an object that represents a ball
in a game or a command in an email processor has a crisper meaning and
is thus easier to keep in one's mind. But even if one doesn't want to
use object orientation, meaningful names broken into sensible word
lists is always a good idea.

I guess it's possible to over-factor. What is far more likely isn't
that the code has too much factoring, but terrible names for the
factors. Or, there is what I might call "anticipatory factoring"
where the programmer thinks that some unique code sequence deserves a
name because it may be used again, but never is.

The Beez'

Jun 4, 2010, 1:04:02 PM
On Jun 4, 6:21 pm, John Passaniti <john.passan...@gmail.com> wrote:
> Oh. My. God.
>
> I knew people like you existed.  After all, part of my career has been
> cleaning up the total shit that people with your attitude generate.
> But it's rare to find someone who so proudly displays his lack of
> sophistication and experience as if it was a badge of honor.
>
> There are many reasons to factor in *every* language.  Reduction of
> redundancy.  Reduction in code size.  Enhanced maintainability.
> Clearer code.  Easier extensibility of code.  Every last reason why
> Forth programmers advocate factoring applies to *every* language.
> Your unfounded (and I'll bet unmeasured) fears about the overhead of a
> function call in C are not an excuse for you to be a terrible
> programmer.
Well, you have to acknowledge there is more to life than programming
Coca Cola vending machines. Yes, there are many reasons to factor and
usually I do, when speed is not a great issue. Even in Forth, calling a
word takes a performance hit; otherwise ">R BRANCH R> BRANCH" would
take no time whatsoever. If speed is an issue then function call
overhead IS an issue, e.g.
- FICL (switch threading, horrible but speedy)
- gForth (switch threading with GCC extension, even more horrible
code)
- 4tH (switch threading, like FICL).

I know of programmers that aren't even aware of function call overhead
and they make up the majority. They produce that horrible code. And at
job interviews it's a bad attitude to look for copies of yourself,
BTW.

Hans Bezemer

Bernd Paysan

Jun 4, 2010, 1:04:11 PM
John Passaniti wrote:
> There are many reasons to factor in *every* language. Reduction of
> redundancy. Reduction in code size. Enhanced maintainability.
> Clearer code. Easier extensibility of code. Every last reason why
> Forth programmers advocate factoring applies to *every* language.

Of course. And of course, factoring has an overhead in every language.
Even in Forth, you have to write : <somename> <somewords> ; ...
<somename> ... instead of just <somewords>. We have a heated discussion
in another thread, where Anton complains about "pseudo-factors", so
there's even some point in Forth where factoring starts becoming
debatable. The point behind the debate seems to be that the pseudo-
factor breaks an instruction apart. But something like I <lit> + @ is
just one instruction on x86, as well, and nobody complains about that.

But again to anecdotal evidence: A spaghetti-code writing coworker once
said to me that "Forth is unreadable, because it's calling functions
which call functions". His code rarely nested at all. Another, more
recent coworker replaced a very complicated and unmaintainable Verilog
state machine with a much simpler, table driven state machine, but the
"much simpler" bogged down to 1500 lines of code. He had code inside
which essentially did a find first one in a bit vector, and instead of
doing that with a loop or a recursive partitioning scheme, he just wrote
32 lines, each of them checking one bit. Explanation: It's more
readable that way. There are just many more weak programmers than
strong programmers.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

John Passaniti

Jun 4, 2010, 1:17:53 PM
On Jun 4, 11:40 am, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:

> >     if (cmd[0] == 'f' &&
> >         cmd[1] == 'o' &&
> >         cmd[2] == 'o' &&
> >         cmd[3] == 0) do_something();
> >     else if (cmd[0] == 'b' &&
> >              cmd[1] == 'a' &&
> >              cmd[2] == 'r' &&
> >              cmd[3] == 0) do_another_thing();
> >     else ...
>
> Obviously, this was for a time critical piece of code, yes?

No.  It's a command processor taking input at human speed.  There was
absolutely no sensible justification for avoiding the *basic* tools
provided by the language.

> > It was clear he
> > didn't fully understand the duality of pointers and arrays,
>
> None of the self-appointed C experts on comp.lang.c do either...  If the few
> top guys there don't, why would you expect that some average "Joe" would?
> All he had to do was study answers from the test archive of his Fraternity
> to get his CS degree.  IMO, the only true C expert _still_ posting to NG's
> that does is Douglas Gwyn.

I would expect it because the vast majority of the engineers I've
worked with *are* competent, and *do* understand C. They don't write
crap code. They understand the language. They understand the
machine. And they don't "dumb down" what they write to the lowest
common denominator.  They often do what I do-- instead of dumbing down
my work, I help others become more proficient in their understanding.

I fully get there are lots of terrible programmers out there in the
world. Your previous comments about debugging and factoring in C put
you firmly in that camp, and so it's unsurprising you'll go out of
your way to justify mediocrity.

> > The source code size went down dramatically.  The object code size was
> > reduced significantly.  And I have factored all of the command
> > dispatchers into "execute".  Sounds good, right?  Well, no.  I was
> > told I was "too clever" and that this code was harder to understand.
>
> ROFL...  I suspect you're fibbing, but it was funny anyway.  Yes, some
> places do prefer to hire the lowest cost employees they get.  So, it has a
> very small hint of truth to it.

In this case, it wasn't the lowest-cost employee. It was someone who
had skills in assembly language, and nominal skills in C. The company
simply didn't have anyone on staff that could give an honest critique
of the quality of the code. All they knew is that their products had
a lot of bugs in them, which wasn't surprising given what they had to
work with.

> > And I guess, in a sense, they were right.  If you are a terrible C
> > programmer who doesn't understand the basics, yes, this is more
> > complicated.
>
> Curious, did you ever time the old and new?

Yes. In the specific example I provided, my worst case was on the
order of a few milliseconds longer. But as it was a command processor
running at human speed, the difference was negligible. For the
overall program on the many other changes I made (such as using
pointers instead of array indices), my code was hugely faster. And
the vast reduction in code size meant that more of the program was
able to be held in the instruction cache, which had other measurable
benefits.

> So, tell us John, does the "nonsense" version actually run faster?  You
> seemed to have intentionally left that part out of your post and criticized
> other insignificant aspects... e.g., source code size, object code size,
> clarity, cleverness, etc.

Yes, because speed isn't always the dominant factor in an
application. What an idiot does is always assume that speed is the
number one issue and wrap all their concerns around that. What a
professional does is realize that there are many possible concerns--
including different concerns in different parts of the application--
and prioritize them by what is most important. In the systems I work
(typically embedded systems dealing with real-time digital audio
processing), there are usually multiple concerns. There will be parts
of the application for which if they aren't as fast as possible, the
system will fail. And there will be parts where speed makes
absolutely no difference, and other factors (like size or
maintainability) take center stage. Knowing where to focus your
efforts and how is part of being a professional.

Paul Rubin

Jun 4, 2010, 1:30:02 PM
"The Beez'" <han...@bigfoot.com> writes:
> I know of programmers that aren't even aware of function call overhead
> and they make up the majority. They produce that horrible code. And at
> job interviews it's a bad attitude to look for copies of yourself,

Compilers these days will often inline automatically, or on getting
optimization hints (e.g. __inline__ in gcc) from the programmer.

Andrew Haley

Jun 4, 2010, 1:56:46 PM
John Passaniti <john.pa...@gmail.com> wrote:
> On Jun 4, 11:01?am, Doug Hoffman <glide...@gmail.com> wrote:
>> Thank you for pointing out what I also believe to be true. ?That is,
>> it is possible to over-factor. ?Further, I find overly factored code
>> can be less readable/maintainable. ?It also can lead to naming issues
>> (too many names to create and remember). ?Note that this is *not* to
>> say that I don't believe in factoring. ?I believe in appropriate

>> factoring, which admittedly is a bit hard to define and I'll not
>> attempt it.
>
> The goal with factoring should be to find repeated sequences of code
> and give them meaningful names.

I don't think so. It's often worth giving something a name even if
it's only used once: doing so can make code much easier to read.

> I guess it's possible to over-factor. What is far more likely isn't
> that the code has too much factoring, but terrible names for the
> factors. Or, there is what I might call "anticipatory factoring"
> where the programmer thinks that some unique code sequence deserves
> a name because it may be used again, but never is.

All true.

Andrew.

John Passaniti

Jun 4, 2010, 4:50:26 PM
On Jun 4, 1:04 pm, "The Beez'" <hans...@bigfoot.com> wrote:
> Well, you have to acknowledge there is more to life than programming
> Coca Cola vending machines.

Except when that is your job.

> Yes, there are many reasons to factor and
> usually I do. When speed is not a great issue. Even in Forth calling a
> word takes a performance hit otherwise ">R BRANCH R> BRANCH" would
> take no time whatsoever. If speed is an issue then function call

> overhead IS an issue, [...]

Several years ago, I was asked to write a coding standards document
for the company I worked for. The very first rule-- what I called
Rule Zero-- was that you should follow each and every following rule
exactly... except when it didn't make sense to do so. This discussion
about factoring falls into that realm. I don't see anyone saying
(well, anyone credible) that factoring is something you should do at
all times, without question. There are exceptions, and it's useful to
talk about those exceptions. But don't let the exceptions drive the
larger discussion.  The times when someone shouldn't factor because of
a speed issue are, for the vast majority of people, going to be minority
cases.  Know them, discuss them, but don't pretend that an exception
invalidates the rule.

> I know of programmers that aren't even aware of function call overhead
> and they make up the majority. They produce that horrible code. And at
> job interviews it's a bad attitude to look for copies of yourself,
> BTW.

Correct. When I'm conducting job interviews, I am often looking for
people who are *better* than myself, or those who can bring fresh
perspectives on old problems. I want people who are not just
competent, but who can teach me (and the company) something as well.
I'm looking for someone who will argue a design with me, tell me I'm
wrong, and then show me why. A clone of myself? Where would the
creative friction come from?

Regardless, basic competency is at the core. If a candidate tells me
they know C, but they struggle with pointers, then they aren't a C
programmer any more than if they claimed to be a Forth programmer but
didn't understand manipulating data on the stack. I don't see why a
compromise in basic competency is beneficial or necessary.

The Beez'

Jun 4, 2010, 5:09:47 PM
On Jun 4, 10:50 pm, John Passaniti <john.passan...@gmail.com> wrote:
> > Well, you have to acknowledge there is more to life than programming
> > Coca Cola vending machines.
> Except when that is your job.
Even when it's your job to do so, you have to acknowledge it. If not,
there is no reason for others to acknowledge there is such a job as
programming vending machines. Why should they offer you that courtesy
if you're unable to do it?

> Several years ago, I was asked to write a coding standards document
> for the company I worked for.  The very first rule-- what I called

> But don't let the exceptions drive the
> larger discussion.  The times when someone shouldn't factor because of
> a speed issue are for the vast majority of people going to be minority
> cases.  Know them, discuss them, but don't pretend that an exception
> invalidates the rule.

Depends on the number of exceptions. I don't like to live by rules;
instead, I consider each situation as it is and find the best solution
for it. If you can cope with a problem by simply applying a rule, is
it really worth doing?

> Correct.  When I'm conducting job interviews, I am often looking for
> people who are *better* than myself, or those who can bring fresh
> perspectives on old problems.  I want people who are not just
> competent, but who can teach me (and the company) something as well.
> I'm looking for someone who will argue a design with me, tell me I'm
> wrong, and then show me why.  A clone of myself?  Where would the
> creative friction come from?

The willingness to learn requires a certain humility in my experience
and openness to new and other realms of knowledge and science. Like
biology, philosophy, chemistry, physics, etc. Most IT people lack
that. This is interesting.

Hans Bezemer

The Beez'

Jun 4, 2010, 5:11:26 PM
On Jun 4, 7:30 pm, Paul Rubin <no.em...@nospam.invalid> wrote:
> Compilers these days will often inline automatically, or on getting
> optimization hints (e.g. __inline__ in gcc) from the programmer.
Won't help if you need pointers to functions, like in a VM.

Hans

Rod Pemberton

Jun 4, 2010, 6:14:33 PM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:b38177a8-7d75-45f2...@q12g2000yqj.googlegroups.com...

>
> One of my roles has been to conduct technical interviews for job
> candidates. And one of the questions I would often ask candidates is
> how they would debug a system where they couldn't use something like
> printf. If they couldn't answer the question or if they (like you)
> confidently proclaimed that printf was the only tool anyone needed,
> they didn't last long in the interview.
>

Well, that just demonstrates your personal bias. I worked for a few years
on a 5MLoc PL/1 application, a real-time OLTP (online transaction
processing) program. It ran as multiple processes. It was fed by 14
(fourteen) T1's at full capacity. And, as I stated, I only needed to use a
debugger once. That happened to be because I couldn't track the data across
the processes. That was handled by another program for which we had no
rights or code. Fortunately, the debugger could halt processes and trap
events. But, could it have been located without the debugger? I believe
so.

> There are a wide number of
> both standard and creative answers, from logic probes and
> oscilloscopes on port pins, to taking over LEDs on the system, to
> keeping a circular buffer with diagnostic messages in memory for post-
> mortem review.

I did something similar to detect data loss. I determined we were losing
stock quotes. The Multicast packets were timing out from Nasdaq before they
reached our machines even though we had duplicated T1's per line.
Unfortunately, corporate political BS at the time buried the issue...

Unfortunately, this company was like most companies and couldn't figure out
who the critical employees, brains, and money makers were since they were
not highly ranked on the "org chart". Primary result: layoffs. Secondary
result: their stock is at the same price it was in 1999...

> And of course there are dedicated debuggers-- in-
> circuit emulators, JTAG/BDM tools, logic analyzers, and so on.

Yeah, whatever, I worked in the electronics industry for many years too. I
wouldn't remotely expect any programmer to know what an oscilloscope, DSP,
or other non-programming-related tools are.  It's not part of their job
description to lay cement either...  An EE may design a circuit, but take a
board that isn't working properly.  The EE who designed it will spend days
trying to find what's wrong. I've even seen more than one of them fail at
this task, repeatedly. Remember, they *designed* the circuit. It's like a
programmer who doesn't understand what he/she coded. I must admit I don't
always remember what I've written or why, but I can work through and then
usually recall. Now, give that same faulty board to the electronic
technician who fixes them on a daily basis, he'll have it fixed in 3 minutes
flat. Experience and exposure matter. Practice makes perfect. You've
stated so previously, yet you're now claiming that a programmer who "doesn't
think outside the box" or hasn't had enough exposure is in bad shape in your
interview... Who's the moron if not you? I.e., you reject a potentially
skilled and/or smart person by some arbitrary criteria.

The contradictory claims you consistently present across your posts are just
astounding to me. It's like your beliefs or responses change on a per issue
basis. There is no comprehensive whole.

> > I've only used a debugger to find a bug once in my life and it was for a
> > very large (5MLoc), multiple process, PL/1 program.

> > The
> > typical C programmer hasn't the slightest idea what the purpose of
> > factoring is.
>
> > That's true. And, it usually makes no sense, or very little, to use
> > factoring for C. "Factoring" as applied to C will result in large
> > numbers of procedures. Every time you create a procedure,
> > you have to store, pass, read-in parameters, allocate new
> > space on a the stack for local variables, etc. I.e., overhead.
> > For C, the goal is to minimize the data copying and
> > allocation/deallocation necessary to maintain variables. C
> > compilers don't automatically inline the procedures during
> > their optimization, which would eliminate this overhead. To
> > do so, the programmer has to go through and
> > code compiler specific directives to inline the procedures...
>
> Oh. My. God.
>
> I knew people like you existed. After all, part of my career has been
> cleaning up the total shit that people with your attitude generate.

I don't doubt that you hate spending your time, but enjoy the pay of,
cleaning up other's messes. That's probably the only thing I'd hire you
for, because it just irks you so much. I know you'd sit at your desk like a
good little worker and complete the job someday. You'd then come to me for
praise. And, I'd say "Good job!" And, you'd be happy, because your
personal life is devoid of anything.  You wouldn't be like the contractors
I've seen with $1K USD shoes, gold threaded shirts, who read Barron's for
half the day before they get to work... Would you?

Now, to say that "my attitude" - whatever that means - somehow contributes
to the problems you've seen is just total BS...

> But it's rare to find someone who so proudly displays his lack of
> sophistication and experience as if it was a badge of honor.

> ...


> Your unfounded (and I'll bet unmeasured) fears about the overhead of a
> function call in C are not an excuse for you to be a terrible
> programmer.
>

Illogical, baseless, utterly factless... Claims like these are made
everyday on comp.lang.c by people who don't understand how C is converted
into assembly. That's most of them, including the self-appointed experts.
I know how much time x86 instructions generated by the C compilers I use
consume. It takes time. Not much, but more than not. That was the entire
basis and justification for the list of true statements.

> There are many reasons to factor in *every* language. Reduction of
> redundancy. Reduction in code size. Enhanced maintainability.
> Clearer code. Easier extensibility of code. Every last reason why
> Forth programmers advocate factoring applies to *every* language.
>

Sure, "cut-n-paste" is only appropriate in time critical applications,
e.g., the over 500 *identical* if() statements that were in the PL/1 program
I told you about. No joke. They determined that the calling overhead of
putting that if() into a procedure was too much. Programmers before me were
paid very well to thoroughly test this. Lots of money depended on it being
fast. But, that doesn't justify creating excessive levels of procedure
nesting in non-specialized situations. E.g., what if instead of using the
logical operators for bit-wise and, your employee devoted to factoring
creates a routine for bit-wise NAND and rewrites the code for all the other
bitwise operators to use bitwise NAND. Don't you think that's a bit
excessive? Yet that is exactly what you're expounding. Your telling me he
should use getc() and putc() because they are the most "factored", even
though the C library can read 512B or 4KB or "XYZ" KB at a time from some
device. Yet, you also say that for your example, that he/she should've used
strcmp() instead of comparison character by character, i.e., the latter is
"factored"... What's with the contradictions? Reducing the code to the
minimum by factoring just makes no sense in many situations for C. Do I
support reducing code to a near minimum? Yes, I sure do. But, it's for a
specific situation: bootstrapping onto a new OS. Most people are fine if
they use the C libraries available to them.


Rod Pemberton

Rod Pemberton

Jun 4, 2010, 6:14:54 PM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:15bdde8f-6f31-409f...@k39g2000yqb.googlegroups.com...

> I fully get there are lots of terrible programmers out there in the
> world. Your previous comments about debugging and factoring in C put
> you firmly in that camp,

What? There is no possible room for competent programmers who don't need a
debugger in your world? To me, use of debuggers is a strong indicator that
that programmer can't program. I've never seen a skilled programmer use
one. I'm sure they have. But, I've seen many lousy ones use it. They use
it all the time. They "live" there. It's the lousy ones who use a debugger
as a crutch to help them correct their lousy code.

So, I'm like OMG! ROFL for being in the "lousy camp" as deemed by the
mighty JP... All praise JP.

> > Curious, did you ever time the old and new?
>
> Yes. In the specific example I provided, my worst case was on the
> order of a few milliseconds longer. But as it was a command processor
> running at human speed, the difference was negligible. For the
> overall program on the many other changes I made (such as using
> pointers instead of array indices), my code was hugely faster. And
> the vast reduction in code size meant that more of the program was
> able to be held in the instruction cache, which had other measurable
> benefits.
>

If that was the better solution, when you were told you "were too clever"
and actually made the code harder to understand, did you back down and
revert to the original? Or, did you insist they clean out the "dead wood"
programmers next to you? Or, did you attempt to prove to your idiot boss
this way actually is better? It should be better for when your company can
afford to hire competent programmers. My guess is that you backed down and
reverted... I.e., you've allowed inefficiency, poor choices, poor coding,
bad implementation, ignorance, unskilled employees, etc. to continue to
exist at your company and affect its products and eventually your pay
and/or employment. How do you expect your company to improve if you've
contributed to *not improving* it like everyone else there? Wasn't there an
"empower the employee" revolution in the 1990's that eliminated this
attitude? What happened to the Kaizen movement? Did it bypass your
company? ISO 9000? It too?


Rod Pemberton


Bernd Paysan

unread,
Jun 4, 2010, 6:30:15 PM6/4/10
to
Rod Pemberton wrote:
> Wasn't there an
> "empower the employee" revolution in the 1990's that eliminated this
> attitude? What happened to the Kaizen movement? Did it bypass your
> company? ISO 9000? It too?

I suppose all that happened to his company. On paper. That's how it
usually happens. The HR department tells you "We are open to
improvement". But the people who actually have to judge the suggestions
are not.

Doug Hoffman

unread,
Jun 5, 2010, 9:37:04 AM6/5/10
to
On Jun 4, 12:47 pm, John Passaniti <john.passan...@gmail.com> wrote:


> The goal with factoring should be to find repeated sequences of code
> and give them meaningful names.

Typically true, but not always IMO. From Thinking Forth: "Moore: The
particular phrase OVER + SWAP is one that's right on the margin of
being a useful word. If you name such a phrase, you have trouble
knowing exactly what RANGE does. You can't see the manipulation in
your mind. OVER + SWAP has greater mnemonic value than RANGE."

A mechanistic pattern matching approach might suggest that a Forth
source editor be programmed to look for any and all repeating
patterns. I have never seen such an editor (though some may exist) or
a reference to this approach.


> An explosion of names can be hard to
> manage if those names aren't meaningful or collected into a single
> huge namespace.

Yes.

> A large part of what makes object orientation popular
> is how it manages definitions by effectively making the object a
> namespace. Having a definition called "bounce" in a sea of a thousand
> other definitions is indeed difficult because it could mean anything.
> Having that same definition bound to an object that represents a ball
> in a game or a command in an email processor has a crisper meaning and
> is thus easier to keep in one's mind.

Well stated. This is one of the not-so-obvious benefits of using
objects.


> I guess it's possible to over-factor. What is far more likely isn't
> that the code has too much factoring, but terrible names for the
> factors.

Agreed.

-Doug

Elizabeth D Rather

unread,
Jun 5, 2010, 2:48:37 PM6/5/10
to
On 6/4/10 12:30 PM, Bernd Paysan wrote:
> Rod Pemberton wrote:
>> Wasn't there an
>> "empower the employee" revolution in the 1990's that eliminated this
>> attitude? What happened to the Kaizen movement? Did it bypass your
>> company? ISO 9000? It too?
>
> I suppose all that happened to his company. On paper. That's how it
> usually happens. The HR department tells you "We are open to
> improvement". But the people who actually have to judge the suggestions
> are not.


That reminds me of the Eastman Kodak company. In the 80's we worked on
a series of quality-control instruments that checked various products
(film, photographic paper, ...) for defects. At one point the company
instituted a "zero defects" program. We were instructed to change all
of the programs to report "features" instead of "defects". The program
was a big success.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Hugh Aguilar

unread,
Jun 5, 2010, 3:49:04 PM6/5/10
to
On Jun 4, 4:14 pm, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:
> "John Passaniti" <john.passan...@gmail.com> wrote in message

>
> news:15bdde8f-6f31-409f...@k39g2000yqb.googlegroups.com...
>
> > I fully get there are lots of terrible programmers out there in the
> > world.  Your previous comments about debugging and factoring in C put
> > you firmly in that camp,
>
> What?  There is no possible room for competent programmers who don't need a
> debugger in your world?  To me, use of debuggers is a strong indicator that
> that programmer can't program.  I've never seen a skilled programmer use
> one.  I'm sure they have.  But, I've seen many lousy ones use it.  They use
> it all the time.  They "live" there.  It's the lousy ones who use a debugger
> as a crutch to help them correct their lousy code.

I'm with you, in that I consider debuggers to be the hallmark of a
bad programmer. This is pretty much the same point that I was making
over in the thread regarding watchpoints. On the other hand though, I
hesitate to say "never" --- there aren't any absolute rules.

> So, I'm like OMG!  ROFL for being in the "lousy camp" as deemed by the
> mighty JP...  All praise JP.

You're allowing a troll to get your goat! lol The problem with c.l.f.
is not that we have so many trolls; the problem is that we have so
many people who respond to trolls.

BTW, I don't know what ROFL means. Is there a dictionary of internet
acronyms somewhere that lists these?

> > > Curious, did you ever time the old and new?
>
> > Yes.  In the specific example I provided, my worst case was on the
> > order of a few milliseconds longer.  But as it was a command processor
> > running at human speed, the difference was negligible.  For the
> > overall program on the many other changes I made (such as using
> > pointers instead of array indices), my code was hugely faster.  And
> > the vast reduction in code size meant that more of the program was
> > able to be held in the instruction cache, which had other measurable
> > benefits.

Getting code to work is a lot more important than speed issues. When I
was promoting factoring, and test-as-you-go, this was all about
getting code to work.

> If that was the better solution, when you were told you "were too clever"
> and actually made the code harder to understand, did you back down and
> revert to the original?  Or, did you insist they clean out the "dead wood"
> programmers next to you?  Or, did you attempt to prove to your idiot boss
> this way actually is better?  It should be better for when your company can
> afford to hire competent programmers.  My guess is that you backed down and
> reverted...  I.e., you've allowed inefficiency, poor choices, poor coding,
> bad implementation, ignorance, unskilled employees, etc. to continue to
> exist at your company and affect its products and eventually your pay
> and/or employment.  How do you expect your company to improve if you've
> contributed to *not improving* it like everyone else there?  Wasn't there an
> "empower the employee" revolution in the 1990's that eliminated this
> attitude?  What happened to the Kaizen movement?  Did it bypass your
> company?  ISO 9000?  It too?

I never back down! Of course, that is also a big part of why I'm
unemployed.

Ultimately, the people who thrive as employees are the ones who kiss
the boss's butt. Technical competence is pretty much a non-issue.

Programs written by hobbyists are often technical marvels. You never
see anything like this in the workplace. What you see in the workplace
is generally pretty dull and uninspired coding. I've worked in Forth,
C/C++ and various assembly languages, and it is all the same
everywhere --- dull and uninspired. When I worked as a Forth
programmer at Testra, that MFX compiler I wrote was a technical
marvel. The slide-rule program I recently wrote isn't all that
marvelous; it is pretty straightforward and simple. The compiler (and
especially the assembler) really was quite something though; it is the
best program that I've ever written. During my employment though, I
*never* received a single word of praise. I was never given a raise,
and I was never given health insurance although the other employee got
it. The fact is that technical competence is a non-issue in most
companies. Kissing the boss's butt is the only thing that really
matters.

Here is a tip for anybody who wants to work at Testra. The boss, John
Hart, cares about Christianity and the Pro-Life issue. If you want to
succeed at Testra, talk up these issues. Don't think that you can just
ignore all of this foolishness as being irrelevant. It's not irrelevant
--- it's your paycheck.

The Beez'

unread,
Jun 5, 2010, 4:53:35 PM6/5/10
to
On Jun 3, 10:35 pm, MarkWills <markrobertwi...@yahoo.co.uk> wrote:
> Good advice from all. Thank you very much. Regarding complexity of
> nested code, I have found that factoring at the 'point of complexity'
> into a new word helps. I don't know why, the underlying complexity has
> just moved. However, my brain finds each individual word/factor easier
> to deal with on an individual level, I think.
It's funny, but you're right. If I have to write Forth code and I'm
not quite sure how to solve things, I work top down. I keep on
postponing writing the real complexity, writing the loops and stuff
instead. Usually, when I finally get to the point where I have to
write the real McCoy, it has become so trivial I don't remember what
the real problem was in the first place. It's like peeling off the
layers of an onion.

Hans Bezemer

Elizabeth D Rather

unread,
Jun 5, 2010, 5:41:16 PM6/5/10
to

Long ago, Chuck Moore advanced the theory of "conservation of
complexity": A problem (or project) has a certain inherent level of
complexity. You can't make it go away. If you dive in with a
brute-force or "quick and dirty" approach you're likely to end up with
some really messy, complicated code. However, if you invest thought and
care into your design, you can often come up with a solution that
*looks* much simpler. The complexity hasn't gone away, it's embodied in
the design effort. But it's a good investment, because the resulting
code will be much easier to debug and to maintain.

I think that's what you both are finding.

Coos Haak

unread,
Jun 5, 2010, 8:03:48 PM6/5/10
to
On Sat, 05 Jun 2010 12:49:04 -0700, Hugh Aguilar wrote:

<snip>


> BTW, I don't know what ROFL means. Is there a dictionary of internet
> acronyms somewhere that lists these?

Of course, can't you google?

http://www.acronymfinder.com

--
Coos

CHForth, 16 bit DOS applications
http://home.hccnet.nl/j.j.haak/forth.html

John Passaniti

unread,
Jun 6, 2010, 1:29:42 AM6/6/10
to
On Jun 4, 5:09 pm, "The Beez'" <hans...@bigfoot.com> wrote:
On Jun 4, 10:50 pm, John Passaniti <john.passan...@gmail.com> wrote:
> > > Well, you have to acknowledge there is more to life than programming
> > > Coca Cola vending machines.
> > Except when that is your job.
>
> Even when it's your job to do so, you have to acknowledge it. If not,
> there is no reason for others to acknowledge there is such a job as
> programming vending machines. Why should they offer you that courtesy
> if you're unable to do it?

I've lost what you're trying to say here.

> Depends on the number of exceptions. I don't like to live by rules,
> instead consider each situation as it is and find the best solution
> for it. If you can cope with a problem by simply applying a rule, is
> it really worth doing it?

I'm not sure what conversation you're having. My statement was that
following rules for software makes sense as long as the rules make
sense. And I used an example where I actively told people following a
set of rules that I had written to ignore those rules if and when they
didn't make sense. So I'm not sure why you're arguing about the
application of rules when I'm saying that the programmer's intellect,
experience, and judgment should always trump static rules.

> The willingness to learn requires a certain humility in my experience
> and openness to new and other realms of knowledge and science. Like
> biology, philosophy, chemistry, physics, etc. Most IT people lack
> that. This is interesting.

It's a fundamental part of my philosophy towards everything and it
comes from not only working with some brilliant people in my career,
but also people who were far less experienced but who had unique
insights.

John Passaniti

unread,
Jun 6, 2010, 2:42:24 AM6/6/10
to
On Jun 4, 6:14 pm, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:

> > One of my roles has been to conduct technical interviews for job
> > candidates.  And one of the questions I would often ask candidates is
> > how they would debug a system where they couldn't use something like
> > printf.  If they couldn't answer the question or if they (like you)
> > confidently proclaimed that printf was the only tool anyone needed,
> > they didn't last long in the interview.
>
> Well, that just demonstrates your personal bias.  

Correct. And it is that personal bias that my employers valued when
they gave me the role of conducting interviews.

> I worked for a few years
> on a 5MLoc PL/1 application, a real-time OLTP (online transaction
> processing) program.  It ran as multiple processes.  It was fed by 14
> (fourteen) T1's at full capacity.  And, as I stated, I only needed to use a
> debugger once.  That happened to be because I couldn't track the data across
> the processes.  That was handled by another program for which we had no
> rights or code.  Fortunately, the debugger could halt processes and trap
> events.  But, could it have been located without the debugger?  I believe
> so.

I once debugged a system by licking the end of my finger and putting
it across a 48-volt rail that was controlled by a process. The
question is never whether you can do without a debugger, since clearly
you can. The question is if a debugger can make you more productive.
Do some programmers abuse debuggers? Yep. I've worked with
programmers who spent more time in a debugger than writing code, and
it was a crutch for their mediocre skills. But for someone who is
skilled, a debugger can save huge amounts of time and give insights
that you can't get with a "I got here" message dumping some state.
Necessary? No. Helpful? Yes.

> > And of course there are dedicated debuggers-- in-
> > circuit emulators, JTAG/BDM tools, logic analyzers, and so on.
>
> Yeah, whatever, I worked in the electronics industry for many years too.  I
> wouldn't remotely expect any programmer to know what an oscilloscope, DSP,
> or other non-programming related tools.  It's not part of their job
> description to lay cement either...  

Wow, shows how far behind the times you are. As an embedded systems
software engineer, I *am* expected to be competent not only in the
hardware of a system, but the software. It's a basic, fundamental,
and necessary part of the job. I don't know what "electronics
industry" experience you have, but it clearly wasn't doing the kind of
work that I and many others do. And it's also a big part of the
curriculum of every college and university that offers a computer
engineering degree (or equivalent).

> An EE may design a circuit, but take a
> board that isn't working properly.  The EE who designed will spend days
> trying to find what's wrong.  I've even seen more than one of them fail at
> this task, repeatedly.  Remember, they *designed* the circuit.  It's like a
> programmer who doesn't understand what he/she coded.  I must admit I don't
> always remember what I've written or why, but I can work through and then
> usually recall.  Now, give that same faulty board to the electronic
> technician who fixes them on a daily basis, he'll have it fixed in 3 minutes
> flat.  Experience and exposure matter.  Practice makes perfect.  You've
> stated so previously, yet you're now claiming that a programmer who "doesn't
> think outside the box" or hasn't had enough exposure is in bad shape in your
> interview...  Who's the moron if not you?  I.e., you reject a potentially
> skilled and/or smart person by some arbitrary criteria.

No, I reject skilled and smart people not by arbitrary criteria, but
by the criteria set forth by my employers. When you're looking to
hire an embedded systems software engineer, the very title should give
you a strong hint as to what is necessary. They need to understand
not just the process of designing, coding, testing, and maintaining
software, but they also need to understand embedded systems. And that
implicitly requires an understanding of hardware (or at least the
hardware appropriate for the job).

> The contradictory claims you consistently present across your posts are just
> astounding to me.  It's like your beliefs or responses change on a per issue
> basis.  There is no comprehensive whole.

When you wildly distort what I've written and when you take your own
limited experience and try to apply it outside what you know, yes, I
guess my claims are contradictory. Thankfully, I and many others are
not bound by the limits of your experience.

> I don't doubt that you hate spending your time, but enjoy the pay of,
> cleaning up other's messes.  That's probably the only thing I'd hire you
> for, because it just irks you so much.  I know you'd sit at your desk like a
> good little worker and complete the job someday.  You'd then come to me for
> praise.  And, I'd say "Good job!"  And, you'd be happy, because your
> personal life is devoid of anything.  You wouldn't be like the contractors
> I've seen with $1K USD shoes, gold threaded shirts, who read Barron's for
> half the day before they get to work...  Would you?

Nope. I go to work in sneakers and (these days) shorts. Not much
interest in Barrons, but I do like reading publications like Free
Inquiry and The Skeptical Inquirer during lunch. And I don't accept
jobs from incompetents and the self-important, because they tend to
take down companies. Where I'm working now is like most of the
companies I've worked for-- small, technically innovative, and driven
by engineering. It's the kind of companies you don't sound like you
would work out well in.

> Now, to say that "my attitude" - whatever that means - somehow contributes
> to the problems you've seen is just total BS...

I thought I was clear, but since you have reading comprehension
problems, let me rephrase. Your statements about factoring code in C
(and your justifications for it) put you in a class of programmers
who, at best, take your limited experience and criteria against which
you code (in this case that speed is everything) and ignore that in
the real world, there can be a wide range of criteria against which
code needs to perform.  Code size, extensibility,
understandability, maintainability... all these are part of the goals
of factoring in *any* language, much less C. And then we have this...

> Illogical, baseless, utterly factless...  Claims like these are made
> everyday on comp.lang.c by people who don't understand how C is converted
> into assembly.  That's most of them, including the self-appointed experts.
> I know how much time x86 instructions generated by the C compilers I use
> consume.  It takes time.  Not much, but more than not.  That was the entire
> basis and justification for the list of true statements.

And I know the cost of every compiled C statement across the many 8,
16, 32, and 64-bit microprocessors and microcontrollers that I write
code for. Your statements were true-- that there is an overhead in
making a function call. That isn't in question. The question is if
that overhead matters, relative to the benefits of factoring code.
That requires the programmer actually *think* about the problem, and
make a determination about the value of what they do. I'm sorry if
such thoughtful programming is offensive to you, but the mindless
programming style you espouse-- where you reflexively don't factor
because of that overhead-- is not something I see of value.

> Sure, "cut-n-paste" is only appropriate in time critical applications,
> e.g., the over 500 *identical* if() statements that were in the PL/1 program
> I told you about.  No joke.  They determined that the calling overhead of
> putting that if() into a procedure was too much.

For that specific application, sure. I'm not sure why you take the
experience from a single application and apply it generically to
everything.

> E.g., what if instead of using the
> bit-wise AND operator, your employee devoted to factoring
> creates a routine for bit-wise NAND and rewrites the code for all the other
> bitwise operators to use bitwise NAND.  Don't you think that's a bit
> excessive?  

Yes. In this straw-man example, you've correctly identified a
situation that doesn't happen in the real-world. Would you like to
come up with more bullshit?

> Yet that is exactly what you're expounding.  

No. Show me the message where I espouse factoring basic operators. I
merely said factoring is far more often than not a good thing in any
language. You've inflated my arguments beyond all reason, probably
because every time you emit nonsense, you know you can't argue the
substance of what I wrote, so you invent straw-man arguments to fire
against. Like this one:

> You're telling me he
> should use getc() and putc() because they are the most "factored", even
> though the C library can read 512B or 4KB or "XYZ" KB at a time from some
> device.  

Now you're completely inventing things I never said, coupled with an
unintelligible argument against buffering.

> Yet, you also say that for your example, that he/she should've used
> strcmp() instead of comparison character by character, i.e., the latter is
> "factored"...  

No, he should have used strcmp because (1) the language provides it,
(2) the routine was already being brought in by other code, so there
was no additional cost in using it, (3) it generated less code, which
in this product was a huge concern (they were talking about increasing
memory and driving the cost of the product up), (4) it made the code
far more readable, (5) it eliminated a class of errors (namely
incorrect subscripts) that they had in the production code.

> What's with the contradictions?  

The only contradiction comes from your not understanding what I'm
writing, combined with the straw-man arguments you're creating.

> Reducing the code to the
> minimum by factoring just makes no sense in many situations for C.  

In my experience, you are dead wrong. Your singular focus on speed is
idiotic.

Elizabeth D Rather

unread,
Jun 6, 2010, 3:12:08 AM6/6/10
to
On 6/5/10 8:42 PM, John Passaniti wrote:
> On Jun 4, 6:14 pm, "Rod Pemberton"<do_not_h...@notreplytome.cmm>
...

>> Yeah, whatever, I worked in the electronics industry for many years too. I
>> wouldn't remotely expect any programmer to know what an oscilloscope, DSP,
>> or other non-programming related tools are. It's not part of their job
>> description to lay cement either...
>
> Wow, shows how far behind the times you are. As an embedded systems
> software engineer, I *am* expected to be competent not only in the
> hardware of a system, but the software. It's a basic, fundamental,
> and necessary part of the job. I don't know what "electronics
> industry" experience you have, but it clearly wasn't doing the kind of
> work that I and many others do. And it's also a big part of the
> curriculum of every college and university that offers a computer
> engineering degree (or equivalent).

Hilarious. For 30 years it's been the case that customers present our
engineers with a piece of newly-built gear that they're supposed to
write firmware for, and do you know what fraction of the time it works
as designed/documented? 0. If our engineers didn't know how to read
circuit diagrams, use things like oscilloscopes & logic analyzers, etc.,
to debug the hardware and software *together* we'd never get anything to
work. John is 100% right here.

John Passaniti

unread,
Jun 6, 2010, 4:14:29 AM6/6/10
to
On Jun 4, 6:14 pm, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:

> > I fully get there are lots of terrible programmers out there in the
> > world.  Your previous comments about debugging and factoring in C put
> > you firmly in that camp,
>
> What?  There is no possible room for competent programmers who don't need a
> debugger in your world?  To me, use of debuggers is a strong indicator that
> that programmer can't program.  I've never seen a skilled programmer use
> one.  I'm sure they have.  But, I've seen many lousy ones use it.  They use
> it all the time.  They "live" there.  It's the lousy ones who use a debugger
> as a crutch to help them correct their lousy code.

I know a carpenter who bought a laser line. It's a device that spins
around and marks a plane with a beam of light. The carpenter has been
building large structures for years, and never needed such a device in
the past. But he uses it because it makes his job easier, faster, and
more accurate. If the device failed, it wouldn't stop him at all-- he
would just rely on older methods. Now, there are other carpenters who
have far less skill and experience who buy such devices because they
can't figure out how to mark an accurate level plane across a large
structure. They'll use such a device-- sometimes incorrectly. And
they'll get some benefit from it, but it really just masks their lack
of skill in using more basic tools.

Most reasonable people understand that the problem isn't the tool, but
the person who uses the tool. The fact the carpenter I know doesn't
need the laser line but uses one doesn't mean he lacks skill. It
means he has a tool that works well for him. The fact there are
people who are less-skilled who use it as a crutch says *nothing*
about him.

The majority of the time I spend debugging isn't using a debugger or
inserting printf statements. It's spent looking at the code, and
carefully considering where the bug comes from. When possible, I'll
often sprinkle print statements into the code, but it isn't always
possible (and I gave examples of when it isn't). Sometimes that
helps, but in the world of embedded systems, there are many times when
it can't. These days, most microprocessors and microcontrollers have
built-in hardware to assist debugging and trivial hardware-- usually
the same hardware used to download code into the system-- can give you
access to that. It makes zero sense to me to *not* use such a
debugger in such cases. The fact that some people use it as a crutch
doesn't say anything about my use of it. With a debugger, I can get
the effect of a printf without modifying the code, allowing me to
avoid heisenbugs. With a debugger, I can step in and change state
(something printf can't do) interactively and test a hypothesis about
a bug. With some debuggers, I can view complex multi-process bugs in
a way that would be impossible with printf.

And, quite often, the very same interfaces that allow debuggers to
work allow tools like profilers and coverage analyzers to work.  And
ironically, your preoccupation with speed as the single criterion by
which code is judged demands real-world measurement, and it is
the very same debugger interfaces that make such measurement easy.

> So, I'm like OMG!  ROFL for being in the "lousy camp" as deemed by the
> mighty JP...  All praise JP.

Your being in the lousy camp is more a function of your statements
about factoring in C and your singular preoccupation with speed. Your
statements about debuggers are more about your limited experience.

> If that was the better solution, when you were told you "were too clever"
> and actually made the code harder to understand, did you back down and
> revert to the original?  

No, I did not back down. I was told by the programmer that I was "too
clever," but nobody else thought so. They enjoyed that I reduced the
code size substantially (which meant that they didn't have to add
memory and increase the cost of the system), and they enjoyed that I
eliminated an entire class of bugs (incorrect subscripts) from the
command processor.

> Or, did you insist they clean out the "dead wood"
> programmers next to you?  

I didn't need to, and it wasn't my place to make such a suggestion.
The company didn't fire him, they moved him to a different position
where his skills (which were more on the hardware side) made sense.

> Or, did you attempt to prove to your idiot boss
> this way actually is better?  

There was no need.

> It should be better for when your company can
> afford to hire competent programmers.  My guess is that you backed down and
> reverted...  I.e., you've allowed inefficiency, poor choices, poor coding,
> bad implementation, ignorance, unskilled employees, etc. to continue to
> exist at your company and affect its products and eventually your pay
> and/or employment.  

You guessed wrong, but I'm fascinated by your going out on a limb and
making these predictions.

> How do you expect your company to improve if you've
> contributed to *not improving* it like everyone else there?  Wasn't there an
> "empower the employee" revolution in the 1990's that eliminated this
> attitude?  What happened to the Kaizen movement?  Did it bypass your
> company?  ISO 9000?  It too?

Again, you're wildly off the mark. The improvements I made had a
direct effect not only on the company's bottom line, but on the number
of customer support calls due to bugs. But thanks for playing along,
and you're more than welcome to continue generating a false history of
my career if you like. I guess it's the best you're able to do.

John Passaniti

unread,
Jun 6, 2010, 4:35:14 AM6/6/10
to
On Jun 4, 6:30 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> I suppose all that happened to his company.  On paper.  That's how it
> usually happens.  The HR department tells you "We are open to
> improvement".  But the people who actually have to judge the suggestions
> are not.

I guess it's sad that you and others have worked (or choose to work)
in companies like that. May I suggest the strategy I've used to avoid
this nonsense? The companies I've worked for are small, and driven by
their engineering departments. One thing common to most small
companies is that they can't survive long on bullshit. There simply
isn't much if any buffer to allow incompetence to persist. And in the
case of the companies I've worked for, the people making the
evaluation of suggestions for improvement have all been engineers
themselves, so it's never been hard for me to push through
improvements, especially when I've been able to make my case
objectively.

We all make choices in life. Some people make choices to work for
companies that exceed even the worst Dilbert stereotype. I've had the
opportunity in the past to work for some of the largest employers in
my area, but turned their offers down in favor of companies where I
could make a difference, where I would feel good about my
contribution, and where I could grow. Had I accepted some of the job
offers I've had in the past, it's quite possible I might today be a
little richer. I might today be in middle management, overseeing
engineering efforts I didn't understand, and doing everything I could
to not rock the boat. That's not the path I took, and at age 45 when
I look back on my career choices, I have zero regrets.

John Passaniti

unread,
Jun 6, 2010, 5:33:59 AM6/6/10
to
On Jun 5, 9:37 am, Doug Hoffman <glide...@gmail.com> wrote:
> A mechanistic pattern matching approach might suggest that a Forth
> source editor be programmed to look for any and all repeating
> patterns.  I have never seen such an editor (though some may exist) or
> a reference to this approach.

An obvious view of factoring is that it is a form of data
compression. And in fact, some data compression algorithms-- the
dictionary coders like LZW and LZ77-- work by finding repeated strings
and replacing them with a reference to that original string. So it
isn't too much of a stretch of the imagination that one could do the
same with Forth code. Doing it at the source level would be difficult
because the programmer can invent arbitrary syntax. But the compiled
form of that code is somewhat easier to deal with. There, considering
compiled definitions as strings of whatever the threading model uses--
addresses, call, byte-codes-- it should be possible to automatically
find and create (unnamed) factors. There would be lots of things to
get in the way-- embedded branch addresses, literals, etc. But the
basic idea seems doable. You would want some controls over this--
such as only auto-factoring when the code sequence is longer than some
threshold, and only when it occurs more than two or three times. If a
factor appears far more often than that, it should probably also be
presented to the programmer as a warning that the sequence is likely
something they should extract and name themselves.
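The dictionary-coder analogy can be sketched in a few lines. This is a toy illustration only (in Python, since the idea is language-neutral); the token list standing in for a compiled threaded-code definition, the thresholds, and the `('CALL', ...)` marker standing in for a reference to the extracted factor are all my own assumptions, not features of any actual Forth compiler:

```python
from collections import Counter

def find_candidate_factors(code, min_len=3, min_count=3):
    """Count every subsequence of length >= min_len and keep those that
    repeat at least min_count times -- candidates worth factoring out."""
    counts = Counter()
    for length in range(min_len, len(code) // 2 + 1):
        for i in range(len(code) - length + 1):
            counts[tuple(code[i:i + length])] += 1
    return {seq: n for seq, n in counts.items() if n >= min_count}

def auto_factor(code, min_len=3, min_count=3):
    """Greedily extract the longest qualifying repeat into an unnamed
    factor, the way a dictionary coder replaces repeated strings with a
    reference to the first occurrence."""
    candidates = find_candidate_factors(code, min_len, min_count)
    if not candidates:
        return code, None
    factor = list(max(candidates, key=len))
    out, i = [], 0
    while i <= len(code) - len(factor):
        if code[i:i + len(factor)] == factor:
            out.append(('CALL', tuple(factor)))  # reference to the factor
            i += len(factor)
        else:
            out.append(code[i])
            i += 1
    out.extend(code[i:])  # keep any unmatched tail
    return out, factor
```

A real implementation would have to respect the embedded branch addresses and literals mentioned above; this sketch assumes a flat list of position-independent tokens.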

John Passaniti

Jun 6, 2010, 5:56:49 AM
On Jun 5, 3:49 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> I never back down! Of course, that is also a big part of why I'm
> unemployed.

No, I'm guessing that's not the reason. Your behavior here in
comp.lang.forth suggests other reasons. When I know I'm right, I
don't back down from my positions, and I *prove* my case objectively.
Your performance in this newsgroup suggests that proving your case
with objective measurement is something you're either uncomfortable
with or unable to do.


Rod Pemberton

Jun 6, 2010, 2:56:50 PM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:2dd4efd8-ceb1-4bdf...@d37g2000yqm.googlegroups.com...

> The companies I've worked for are small, and driven by
> their engineering departments.

Been there. Done that.

Small companies are always trying to outsource their manufacturing. They
always saw it as a cost burden. They never understood that it was usually a
critical component of their success: both a quality-control point and a
profit center.

> One thing common to most small
> companies is that they can't survive long on bullshit.

They can't survive long if they aren't selling product for a profit. BS can
survive forever, esp. if that's exactly what the customer wants. I saw
numerous products where I'd ask myself, "What idiot would buy that?" Yet,
they sold them by the thousands. They were profitable. Competitors'
products were of much poorer quality. We sold what the customer wanted. It
worked. It lasted. Yes, IMO, most people in the markets that the small
companies sold to weren't that bright. But, they still wanted things, they
wanted them made a certain way, they wanted them to do specific things, and
they had money to buy them.

> There simply
> isn't much if any buffer to allow incompetence to persist.

That assumes they know what "incompetence" is or what leads to it. That's a
huge assumption. E.g., one small company I worked for took on a project
that was too large for them. They had the manufacturing to handle it. The
"important people" in the engineering staff believed it was a larger but
doable project, i.e., reasonable. Some of the engineers wanted to create a
smaller product, as they had always done. In reality, the engineering staff
was actually very inexperienced for the challenge. They were using
technology for smaller electronic devices and attempted to scale that up to
the larger product. This was a really, really poor design choice, made by
some very bright people. Then, due to development delays resulting from
that poor choice, they missed a critical unknowable, "invisible" market
deadline. They could've been the first to market, but weren't; being
first might've allowed them to dominate the market. A
bunch of other companies developed similar, albeit substantially smaller,
simpler, and cheaper products. So, those companies came to the market
first, albeit much later than our missed deadline. Next, they took on lots
of debt in order to keep the company going until they could recover. No
sales for the new product. It was too large and expensive. There was no
smaller version. Boom! Bankruptcy. Company sold. "There simply isn't
much if any buffer" to overcome risks leading to failure, especially ones
you can't foresee.


Rod Pemberton

Rod Pemberton

Jun 6, 2010, 2:57:30 PM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:50d16906-698e-4b36...@e5g2000yqn.googlegroups.com...
> ... small, technically innovative, and driven

> by engineering. It's the kind of companies you don't sound like you
> would work out well in.

Been there. Done that.

> Your statements about factoring code in C
> (and your justifications for it) put you in a class of programmers
> who, at best, take your limited experience and criteria against which
> you code (in this case that speed is everything) and ignore that in
> the real world, there can be a wide range of criteria against which
> code needs to perform against.

Ah, like I said, you have no ability to distinguish talent from non-talent
except by placing everyone into the non-talent category by default.
When do they get to come out of the penalty box?

> The question is if
> that overhead matters, relative to the benefits of factoring code.

It does. It does in microprocessor- and DSP-based products still using
ancient 10 MHz controllers, as well as in RT OLTP running on expensive
miniframes.

> That requires the programmer actually *think* about the problem,

Ah, so you're using "factoring" as another arbitrary, proxy test, to
quantify one's experience... It's still arbitrary and biased. And, as I've
indicated repeatedly, it's the *wrong* solution for some types of
programming.

> > E.g., what if instead of using the
> > logical operators for bit-wise and, your employee devoted to factoring
> > creates a routine for bit-wise NAND and rewrites the code for all the
> > other bitwise operators to use bitwise NAND. Don't you think that's
> > a bit excessive?
>

> In this straw-man example, you've correctly identified a
> situation that doesn't happen in the real-world. Would you like to
> come up with more bullshit?
>

Bullshit? Strawman? Where've you been? ...

It does happen in the real world. It was posted to comp.lang.forth many
years ago. I'll even provide the link, again. Admittedly, it doesn't
happen in the *commercial* real world, but you didn't make *that*
claim...

Mikael Patel defined Forth's NOT, AND, OR, XOR operators
in terms of NAND. It's in the real world and archived:
http://groups.google.com/group/comp.lang.forth/msg/5308eddbedcd558c
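For readers who don't want to chase the link: the trick works because NAND is functionally complete, so every other bitwise operator can be rewritten in terms of it. Here is a sketch of the same identities in Python rather than Forth; the 16-bit cell width and the function names are my illustrative assumptions, not Patel's actual definitions:

```python
MASK = 0xFFFF  # pretend cells are 16 bits wide

def nand(a, b):
    """Bitwise NAND, masked to the cell width."""
    return ~(a & b) & MASK

def invert(a):
    """NOT x == NAND(x, x)"""
    return nand(a, a)

def b_and(a, b):
    """AND == NOT(NAND): nand twice with the same operands."""
    t = nand(a, b)
    return nand(t, t)

def b_or(a, b):
    """OR(a, b) == NAND(NOT a, NOT b) (De Morgan)."""
    return nand(invert(a), invert(b))

def b_xor(a, b):
    """XOR built from four NANDs."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))
```

Whether rewriting AND/OR/XOR this way in production code is wise is exactly the dispute above; the point here is only that it is possible.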

> Show me the message where I espouse factoring basic operators.

There is nothing limiting factoring to non-basic operators. That's the
entire point of factoring. Now I see why you're having problems with this...
You've created an artificial psychological boundary over what
can and cannot be factored. There is no such boundary.


Rod Pemberton


Bernd Paysan

Jun 6, 2010, 3:50:33 PM
John Passaniti wrote:

> On Jun 4, 6:30 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
>> I suppose all that happened to his company. On paper. That's how
>> it's usually happens. The HR department tells you "We are open to
>> improvement". But the people who actually have to judge the
>> suggestions are not.
>
> I guess it's sad that you and others have worked (or choose to work)
> in companies like that. May I suggest the strategy I've used to avoid
> this nonsense? The companies I've worked for are small, and driven by
> their engineering departments.

Yes, started there, got bought by a larger company, got bought again,
got sold to another similar company, and in the meantime, most of these
small companies in my field went either the same way or downhill.

> One thing common to most small
> companies is that they can't survive long on bullshit.

Of course. But they won't survive long, anyways. Either they are
successful - then they grow big enough to start with that bullshit
internally, or they are bought, and get that bullshit from the outside.

> We all make choices in life. Some people make choices to work for
> companies that exceed even the worst Dilbert stereotype.

I haven't chosen my last three employers. The last employer I *chose*
didn't make that nonsense, but there are not many of those left, at
least not at the moment. A lot of the companies I was interested in
joining either folded recently, or got bought, or sometimes even
both (got bought, then folded afterwards).

The company I'm working for now is in fact engineering driven, but
that's a recent development, and quite a number of people haven't
realized it yet.

John Passaniti

Jun 6, 2010, 5:04:44 PM
On Jun 6, 2:56 pm, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:

> > The companies I've worked for are small, and driven by
> > their engineering departments.
>
> Been there.  Done that.
>
> Small companies are always trying to outsource their manufacturing.  They
> always saw it as a cost burden.  They never understood that was usually a
> critical component of their success: quality control and profit center.

I'm not sure what this has to do with the conversation we were
having. But I guess it depends on the company, and probably the area
of the country you're in. It hasn't been my experience here in lovely
Rochester, New York. Nearly every company that I've worked for--
directly or indirectly-- has had its own manufacturing line. Where
I work now, the rhythmic pattern of the pick-and-place machines can be
heard from my office when it's very quiet. It's not unusual to
outsource components that are beyond the capabilities and expertise of
the company-- for example, making multi-layer PC boards or metal
enclosures. But the actual manufacturing has been all in-house. And
it may again be a case of self-selection, but when I've been in the
job market and I'm evaluating offers, I've always gravitated to these
kinds of companies.

Again, we all make choices in life. In my case, my accepting an offer
of employment has always been not just an evaluation of the salary,
but if that company reflects my values and interests, and if they do
good, honest, competent work that I can be proud to have contributed
my effort toward. The end result is that I can look back on my career
(and forward) and smile, with no regrets and no excuses. I'm fully
aware that not everyone can do this. I know engineers who have
decided to work for companies that meet every Dilbert stereotype. And
when we get together, most of the conversation is about how they hate
their jobs, how their bosses are incompetent assholes, how the company
is saturated in waste, or the endless variety of ethical lines they
have to cross.

Whenever discussions like this in comp.lang.forth come up, I just
shake my head. We get stories about companies run by incompetents,
mired in busy-work and excess, who care nothing about quality, only
profit. We get stories like the one from Hugh, where (if taken at
face value), the company wasn't just cheap, but abusive. We get
stories about year-long projects that run five years. We get stories
about people who are actually embarrassed to admit they work for a
company. We get stories about companies with bosses who apparently
fell into their jobs and don't know anything. It's endless and
relentless and depressing and not unlike watching the canonical
episode of Oprah where a spouse serially jumps from one dysfunctional
and abusive relationship to another. And while I don't like to blame
the victim, there is a component that is their own doing-- making bad
choices. And if the decision to work for a company doesn't include a
careful evaluation of the company, but only their salary and benefits,
then I hate to say it, but those people deserve everything they get.


Hugh Aguilar

Jun 7, 2010, 5:52:17 PM
On Jun 4, 4:30 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Rod Pemberton wrote:
> > Wasn't there an
> > "empower the employee" revolution in the 1990's that eliminated this
> > attitude?  What happened to the Kaizen movement?  Did it bypass your
> > company?  ISO 9000?  It too?
>
> I suppose all that happened to his company.  On paper.  That's how it
> usually happens.  The HR department tells you "We are open to
> improvement".  But the people who actually have to judge the suggestions
> are not.

When I have succeeded at a program, I wrote the program in a hobbyist
manner. What I mean by this is that I just wrote the program by myself
without any interference. Some programs, such as MFX, were pure
hobbyist, even though I got a paycheck for writing them.

There are three problems I have found with working in a group in a
company:

1.) There is a taboo against admitting that you are figuring anything
out. Everybody supposedly has *experience*, so an admission that you
are figuring out how to proceed is taken as an admission that you
don't actually have any experience. Unfortunately, if nobody is
willing to admit that they are figuring anything out, then nothing
will get figured out. There is a tendency to just charge forward
without a lot of thinking going on.

2.) There is a taboo against back-tracking. Nobody wants to admit that
they made a mistake in the design. When I'm working alone and I screw
up the design of a program (this happens quite a lot), I just discard
that code and try a different tack. This never happens in a group
setting. Every screw-up becomes written in stone and defended to the
last. Once again, this results in a tendency to just charge forward
without a lot of thinking going on (by "thinking" I mean noticing
problems).

3.) Everybody is avoiding responsibility. There is a tendency to push
every decision onto the boss so that, if it is a screw-up (this
happens quite a lot), the employee won't get blamed. Also, we have a
lot of meetings in which decisions get made "by consensus." The
general hope is that the boss isn't going to fire everybody, so there
is safety in numbers. Once again, this results in a tendency to just
charge forward without a lot of thinking going on (by "thinking" I
mean making decisions based upon reason).

This whole thread has turned into an E.R./J.P. self-congratulation
fest. By comparison, I admit that I don't know how to write software
in a group setting. I don't think it is really possible to write cool
software this way. I think that all cool software was written in the
hobbyist manner, and that this will continue to be true forever.

The group setting primarily only works for programs in which
everything has already been figured out. I worked for almost two years
as an IBM370 assembly-language programmer (before the company went out
of business due to being so far behind the technology curve). That was
pretty easy. It was mostly just rehashing the same old programs over
and over again.

My boss (at a different company) once told me in private that I was
the only employee he had who wasn't a "head bobber." Every time he
began to speak, his typical employee would begin nodding in agreement
instinctively and immediately. It was actually pretty funny to watch,
because the head bobbing would begin with the first word out of the
boss's mouth, before the employee even knew what the boss was saying.
The boss could assert that the ocean is above the sky, and the
employee would immediately begin nodding in the affirmative and
declaring: "Yes, yes, you are so right..."

I expect that this is the situation with E.R.'s novice Forth class.
The whole purpose of the novice class is to graduate the head bobbers,
and flunk out people like me. When I point out that CREATE DOES> only
allows for *one* action to be associated with the data type, E.R.
would kick me out of the class and terminate my employment. When I
point out that generated colon words execute about an order of
magnitude faster than DOES> code, E.R. would kick me out of the class
and terminate my employment. There is no way that I could ever
graduate from a class like that. I'm just not a head bobber --- there
would be no point in my even trying to pretend.

On the other hand though, I am willing to pretend to be a C/C++
programmer. I don't care about C/C++ in the slightest, so I am willing
to just bob my head up and down and focus my mind on collecting a pay-
check. I wouldn't say this so plainly on a resume, but I could imply
it by using terms like "goal directed" and "facilitate." Turning nouns
into verbs with an "ize" suffix should also work pretty well. We've
all seen resumes featuring this kind of language!

John Passaniti

Jun 8, 2010, 12:49:51 AM
On Jun 7, 5:52 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> There are three problems I have found with working in a group in a
> company:

I'm going to guess that you have problems working in groups of people,
regardless of what the endeavor is.

> 1.) There is a taboo against admitting that you are figuring anything
> out. Everybody supposedly has *experience*, so an admission that you
> are figuring out how to proceed is taken as an admission that you
> don't actually have any experience. Unfortunately, if nobody is
> willing to admit that they are figuring anything out, then nothing
> will get figured out. There is a tendency to just charge forward
> without a lot of thinking going on.

Where you worked, maybe. Where I've worked, no. Part of that is
because we often were solving original problems sometimes using
untested technologies. So "figuring anything out" comes implicitly
with that territory and indeed, that's what you're paid for. Now, I
will say that if "figuring anything out" was something basic and
fundamental (like a programmer who claimed to understand a language
using their "figuring anything out" time to learn that language on the
job), then no, that's not acceptable.

> 2.) There is a taboo against back-tracking. Nobody wants to admit that
> they made a mistake in the design. When I'm working alone and I screw
> up the design of a program (this happens quite a lot), I just discard
> that code and try a different tack. This never happens in a group
> setting. Every screw-up becomes written in stone and defended to the
> last. Once again, this results in a tendency to just charge forward
> without a lot of thinking going on (by "thinking" I mean noticing
> problems).

You're right that it isn't pleasant to admit you've created a flawed
design. But the mark of a professional is admitting to and learning
from your mistakes. Today in fact, I walked into work with what I
thought was a clear and clever solution to a problem. I sat down, and
about six hours later, came to the realization that my design wasn't
going to work. I could have pretended the problem wasn't there and
probably come up with a work-around. Instead, I walked into a
coworker's office and discussed with him the process I used to come to
my flawed design. We identified where I went wrong, and now it isn't
likely that I'll make that class of mistake again. Plus, as I've
discussed my error with someone else, it isn't just a personal
learning experience. And yes, I understand there are companies where
this would be suicide; I don't work for such companies. Again, life
is a series of choices, and if you choose to work for such a company,
then I'm sorry, but you deserve what you get.

> This whole thread has turned into an E.R./J.P. self-congratulation
> fest. By comparison, I admit that I don't know how to write software
> in a group setting. I don't think it is really possible to write cool
> software this way. I think that all cool software was written in the
> hobbyist manner, and that this will continue to be true forever.

Again, my experience is exactly the opposite. There are three things
I really enjoy about writing software in a group setting:

1. When you're writing software in a group, it's usually because the
size or scope of the problem being solved is beyond one person. I
enjoy very much being part of something larger than myself. And I
enjoy working with people who have different strengths and weaknesses
than I do. It gives me the opportunity to both learn and teach.
Those who think they can't learn from others, or those who are
terrible teachers might not get much out of the experience, but I do.

2. I like having to explain my work to others. Talking out a design
itself is often a great way to expose holes in your thinking. But
when that doesn't happen, feedback from others-- especially those with
wildly different perspectives-- can help you see different, better
ways to solve a problem. But even if it's just getting confirmation
from someone that what you think will work seems reasonable, that's
valuable.

3. Sometimes big projects can feel like they're dragging on. We've
all had days when we just stared at a flashing cursor, looking for
motivation. A good group of people understands this about themselves
and others, and will be there to push you, to motivate you, or
sometimes to pick up the slack. And you in turn will do the same for
them.

On my own time, at home and alone, I'm starting to learn about
developing software for the iPhone/iPod/iPad products from Apple. And
I'm finding that it's pretty rough-- not because I can't understand
the technologies, but because the only people I have to bounce ideas
and questions off of are in support forums. The latency of responses
and lack of in-person human interaction is making learning this kind
of development slower and more boring. I've recently discovered a
group of developers locally that meets to discuss what they're doing
and to present what they've learned to others. I'm excited about that
because it gets it closer to the group interaction that I enjoy.

> The group setting primarily only works for programs in which
> everything has already been figured out. I worked for almost two years
> as an IBM370 assembly-language programmer (before the company went out
> of business due to being so far behind the technology curve). That was
> pretty easy. It was mostly just rehashing the same old programs over
> and over again.

Again, not my experience. I've never worked for a company where what
they were doing had already been figured out. At the minimum, every
project has had some unique aspect about it. The value of the group
was, again, multiple perspectives and skills, each contributing to the
whole.

> I expect that this is the situation with E.R.'s novice Forth class.
> The whole purpose of the novice class is to graduate the head bobbers,
> and flunk out people like me. When I point out that CREATE DOES> only
> allows for *one* action to be associated with the data type, E.R.
> would kick me out of the class and terminate my employment. When I
> point out that generated colon words execute about an order of
> magnitude faster than DOES> code, E.R. would kick me out of the class
> and terminate my employment. There is no way that I could ever
> graduate from a class like that. I'm just not a head bobber --- there
> would be no point in my even trying to pretend.

I kind of doubt it. Although you've only been in this newsgroup for a
relatively short time, many of us have been here for years.
Elizabeth's advice to others tends to scale with their experience and
need. If your application has a need for more than one action (and
that action can't be a dispatch mechanism), I imagine she would
confirm what you said and accept your solution if it was well-
written. If however your objection to CREATE DOES> is purely
philosophical and your actual code doesn't actually need more than one
action (as is often the case), then I would imagine she would say you
were over-designing a solution and should fall back to something
simpler.

Most of the time in this newsgroup, you're creating dragons that you
then fearlessly slay. You might consider a different tactic, since
that clearly isn't working.

> On the other hand though, I am willing to pretend to be a C/C++
> programmer. I don't care about C/C++ in the slightest, so I am willing
> to just bob my head up and down and focus my mind on collecting a pay-
> check. I wouldn't say this so plainly on a resume, but I could imply
> it by using terms like "goal directed" and "facilitate." Turning nouns
> into verbs with an "ize" suffix should also work pretty well. We've
> all seen resumes featuring this kind of language!

Hint: There is this thing called Google, and many times employers use
it to do casual background checks. Pretending to be something you're
not has another name: liar. It's not generally a character trait that
tends to lead to employment, but hey, I'm sure you know best.

Doug Hoffman

Jun 8, 2010, 1:42:15 AM
On Jun 7, 5:52 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:

> There are three problems I have found with working in a group in a
> company:

[ snip ]

> 3.) Everybody is avoiding responsibility. There is a tendency to push
> every decision onto the boss so that, if it is a screw-up (this
> happens quite a lot), the employee won't get blamed.

This is a management problem. If there is fear in the workplace then
people will do very strange and unproductive things. Such as not
making decisions and spending their working hours covering their
behinds instead of doing work. This fear likely was instilled by
management via punishing employees for their "mistakes". This is old
school management. Any modern successful workplace has upper
management that fosters a corporate-wide environment devoid of fear of
making mistakes and devoid of fear in general.

I supervise a team of five engineers. They all know better than to
try to get me to make their decisions for them. They also know when
to come ask for help, but only after they have tried to help
themselves.

-Doug

Elizabeth D Rather

Jun 8, 2010, 2:54:43 AM

My experience is much as John describes above. Virtually all my
programming career has been either as a member of a programming team or
managing one. I enjoy working with and managing groups of people, some
of whom are much smarter and/or more knowledgeable than me. For those
wonderful people, I regard my role as being to smooth their path and be
a sounding board when they need to work something out.

If Hugh were to come to me complaining that "generated colon words
execute about an order of magnitude faster than DOES> code" I would ask
him to prove it. I think it will be difficult, at least on systems I'm
familiar with, because I know exactly the instructions that are executed
in each case: Precisely the same instruction, a CALL. As far as
needing more than one behavior, there are examples in SwiftForth of how
to do that.

Fortunately, I've never been seriously tempted to throw someone out of
one of my classes.

Hugh Aguilar

Jun 8, 2010, 5:30:00 PM
On Jun 8, 12:54 am, Elizabeth D Rather <erat...@forth.com> wrote:
> If Hugh were to come to me complaining that "generated colon words
> execute about an order of magnitude faster than DOES> code" I would ask
> him to prove it.  I think it will be difficult, at least on systems I'm
> familiar with, because I know exactly the instructions that are executed
> in each case:  Precisely the same instruction, a CALL.  As far as
> needing more than one behavior, there are examples in SwiftForth of how
> to do that.

http://groups.google.com/group/comp.lang.forth/browse_thread/thread/4d3bce34eac7bc8e#

In this thread I provided disassembly of a CREATE DOES> word as
compared to the same thing done with a generated colon word. I used
SwiftForth v2, which you are certainly familiar with.

One problem with CREATE DOES> is that you have *two* CALL instructions
(contrary to your statement above that there is only *one* CALL
instruction). You have one CALL into the defined word. It does another
CALL into the DOES> code. This CALL puts the body address on the
return stack. The DOES> code does an R> to move that address to the
parameter stack and then falls into the code that the user wrote.
Having a CALL that goes directly into another CALL kills the speed.

Another problem is that there are typically constant data comma'd into
the body of the datum. The DOES> code fetches this data out (using @)
in order to use it. You are fetching data out of memory at run-time.
With my system, these constant datums are compiled as literals. This
is much faster. It is also more readable, as I can use local variables
for the constant data that have meaningful names. With a CREATE DOES>
word, the user just has to remember what order the constant data were
comma'd in and extract the data correctly. This is a common source of
bugs (for me) --- using the wrong offsets to get the data out of the
body.
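The address arithmetic is the same either way; the difference is only whether the strides are fetched from the word's body with @ at run time or compiled in as literal constants. Here is a sketch of that stride computation in Python (the function names, byte-address convention, and default 4-byte cell are my illustrative assumptions), with the strides captured once at definition time the way a generated colon word compiles them as literals:

```python
def make_3array(dim1, dim2, dim3, cell=4, base=0):
    """Return an address calculator for a 3-D array with the strides
    baked in as constants instead of fetched from memory per access."""
    siz1 = cell          # bytes per step in the first index
    siz2 = dim1 * siz1   # bytes per step in the second index
    siz3 = dim2 * siz2   # bytes per step in the third index
    def addr(x1, x2, x3):
        # element address = base + x1*siz1 + x2*siz2 + x3*siz3
        return base + x1 * siz1 + x2 * siz2 + x3 * siz3
    return addr
```

With dimensions 5 7 9 and a 4-byte cell, the strides come out to 4, 20, and 140 bytes; 20 and 140 are the 14h and 8Ch literals visible in the MINE disassembly further down, and the *4 is the `2 # SHL`.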

This is the way that a typical graduate of your novice class would
write a definer for a 3-dimensional array:

: slow-3array ( dim1 dim2 dim3 siz1 -- )
   create
      dup ,
      >r rot r> * dup ,
      rot * dup ,
      * allot
   does>  \ x1 x2 x3 base-adr -- element-adr
      >r
      r@ 2 cells + @ *  swap
      r@ 1 cells + @ *  +  swap
      r@ @ *  +  r> 3 cells +  + ;

Here is an example of defining an array called TEST with SLOW-3ARRAY.
This array has dimensions 5, 7, and 9, and the element size is 1 cell.
I have provided a disassembly of the result:

5 7 9 cell slow-3array test ok
see test
47C52F 47B9A7 ( slow-3array +78 ) CALL E873F4FFFF

see slow-3array
47B92F 40DDBF ( CREATE ) CALL E88B24F9FF
47B934 4 # EBP SUB 83ED04
47B937 EBX 0 [EBP] MOV 895D00
47B93A 40828F ( , ) CALL E850C9F8FF
47B93F EBX PUSH 53
47B940 0 [EBP] EBX MOV 8B5D00
47B943 4 # EBP ADD 83C504
47B946 EBX ECX MOV 8BCB
47B948 4 [EBP] EBX MOV 8B5D04
47B94B 0 [EBP] EAX MOV 8B4500
47B94E EAX 4 [EBP] MOV 894504
47B951 ECX 0 [EBP] MOV 894D00
47B954 4 # EBP SUB 83ED04
47B957 EBX 0 [EBP] MOV 895D00
47B95A EBX POP 5B
47B95B 0 [EBP] EAX MOV 8B4500
47B95E EBX MUL F7E3
47B960 EAX EBX MOV 8BD8
47B962 4 # EBP ADD 83C504
47B965 4 # EBP SUB 83ED04
47B968 EBX 0 [EBP] MOV 895D00
47B96B 40828F ( , ) CALL E81FC9F8FF
47B970 EBX ECX MOV 8BCB
47B972 4 [EBP] EBX MOV 8B5D04
47B975 0 [EBP] EAX MOV 8B4500
47B978 EAX 4 [EBP] MOV 894504
47B97B ECX 0 [EBP] MOV 894D00
47B97E 0 [EBP] EAX MOV 8B4500
47B981 EBX MUL F7E3
47B983 EAX EBX MOV 8BD8
47B985 4 # EBP ADD 83C504
47B988 4 # EBP SUB 83ED04
47B98B EBX 0 [EBP] MOV 895D00
47B98E 40828F ( , ) CALL E8FCC8F8FF
47B993 0 [EBP] EAX MOV 8B4500
47B996 EBX MUL F7E3
47B998 EAX EBX MOV 8BD8
47B99A 4 # EBP ADD 83C504
47B99D 4081FF ( ALLOT ) CALL E85DC8F8FF
47B9A2 40C2CF ( (;CODE) ) CALL E82809F9FF
47B9A7 4 # EBP SUB 83ED04 \ entry point
47B9AA EBX 0 [EBP] MOV 895D00
47B9AD EBX POP 5B
47B9AE EBX PUSH 53
47B9AF 8 # EBX ADD 83C308
47B9B2 0 [EBX] EBX MOV 8B1B
47B9B4 0 [EBP] EAX MOV 8B4500
47B9B7 EBX MUL F7E3
47B9B9 EAX EBX MOV 8BD8
47B9BB 4 # EBP ADD 83C504
47B9BE 0 [EBP] EAX MOV 8B4500
47B9C1 EBX 0 [EBP] MOV 895D00
47B9C4 EAX EBX MOV 8BD8
47B9C6 4 # EBP SUB 83ED04
47B9C9 EBX 0 [EBP] MOV 895D00
47B9CC 0 [ESP] EBX MOV 8B1C24
47B9CF 4 # EBX ADD 83C304
47B9D2 0 [EBX] EBX MOV 8B1B
47B9D4 0 [EBP] EAX MOV 8B4500
47B9D7 EBX MUL F7E3
47B9D9 EAX EBX MOV 8BD8
47B9DB 4 # EBP ADD 83C504
47B9DE 0 [EBP] EBX ADD 035D00
47B9E1 4 # EBP ADD 83C504
47B9E4 0 [EBP] EAX MOV 8B4500
47B9E7 EBX 0 [EBP] MOV 895D00
47B9EA EAX EBX MOV 8BD8
47B9EC 4 # EBP SUB 83ED04
47B9EF EBX 0 [EBP] MOV 895D00
47B9F2 0 [ESP] EBX MOV 8B1C24
47B9F5 0 [EBX] EBX MOV 8B1B
47B9F7 0 [EBP] EAX MOV 8B4500
47B9FA EBX MUL F7E3
47B9FC EAX EBX MOV 8BD8
47B9FE 4 # EBP ADD 83C504
47BA01 0 [EBP] EBX ADD 035D00
47BA04 4 # EBP ADD 83C504
47BA07 4 # EBP SUB 83ED04
47BA0A EBX 0 [EBP] MOV 895D00
47BA0D EBX POP 5B
47BA0E C # EBX ADD 83C30C
47BA11 0 [EBP] EBX ADD 035D00
47BA14 4 # EBP ADD 83C504
47BA17 RET C3 ok

Now here is how I write a defining word for a 3-dimensional array:

: <3array> { dim1 dim2 dim3 siz1 name | adr siz siz2 siz3 -- }
dim1 siz1 * to siz2 dim2 siz2 * to siz3 dim3 siz3 * to siz
align here to adr siz allot
name get-current :name
dim3 <check3>
siz3 lit*, swap,
dim2 <check2>
siz2 lit*, +, swap,
dim1 <check1>
siz1 lit*, +, adr lit+, ;,
c" ^" name get-current :2name
adr lit, ;,
c" lim-" name get-current :2name
adr siz + lit, ;,
name c" -zero" get-current :2name
adr lit, siz lit, s" erase ; " evaluate
name c" -size" get-current :2name
siz1 lit, ;,
name c" -dim" get-current :2name
dim1 lit, dim2 lit, dim3 lit, ;, ;

: 3array ( dim1 dim2 dim3 size -- )
bl word hstr dup >r <3array> r> dealloc ;
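For readers who don't want to trace the stack comments, the address computation <3array> compiles can be modeled in Python (a sketch only; `make_3array` and its names are mine, and a plain integer stands in for the ALLOTted base address):

```python
# Model of what <3array> computes: siz2 and siz3 are folded to literals
# at definition time, so the run-time code is three scaled multiplies,
# two adds, and an add of the base address.
def make_3array(dim1, dim2, dim3, siz1):
    siz2 = dim1 * siz1   # bytes per step in the second dimension
    siz3 = dim2 * siz2   # bytes per step in the third dimension
    total = dim3 * siz3  # bytes ALLOTted for the whole array
    base = 0             # stand-in for the aligned HERE address
    def index(x1, x2, x3):
        return base + x1 * siz1 + x2 * siz2 + x3 * siz3
    return index, total
```

With the `5 7 9 cell` example below (a 4-byte cell), this gives a 1260-byte array, and `index(1, 2, 3)` is 1*4 + 2*20 + 3*140 = 464.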

For simplicity, I have set BOUNDS-CHECK? to FALSE (so that <CHECK1>
etc. do nothing), making mine comparable to yours, which doesn't have
any bounds checking. Here is an example of my array:

5 7 9 cell 3array mine ok
see mine
47CF2F 8C # EAX MOV B88C000000
47CF34 EBX MUL F7E3
47CF36 EAX EBX MOV 8BD8
47CF38 0 [EBP] EAX MOV 8B4500
47CF3B EBX 0 [EBP] MOV 895D00
47CF3E EAX EBX MOV 8BD8
47CF40 14 # EAX MOV B814000000
47CF45 EBX MUL F7E3
47CF47 EAX EBX MOV 8BD8
47CF49 0 [EBP] EBX ADD 035D00
47CF4C 4 # EBP ADD 83C504
47CF4F 0 [EBP] EAX MOV 8B4500
47CF52 EBX 0 [EBP] MOV 895D00
47CF55 EAX EBX MOV 8BD8
47CF57 EBX 2 # SHL C1E302
47CF5A 0 [EBP] EBX ADD 035D00
47CF5D 4 # EBP ADD 83C504
47CF60 47CA2C # EBX ADD 81C32CCA4700
47CF66 RET C3 ok

You will notice that my array has 19 instructions and one CALL. By
comparison, your array has 1+44=45 instructions and two calls. Your
code is over twice as long as mine. Because you have two CALL
instructions rather than just one, the speed difference is even
greater. It is not exactly an order of magnitude slower, but it is
several times slower.

> Fortunately, I've never been seriously tempted to throw someone out of
> one of my classes.

You once told me that, in all of your years of teaching the novice
class, no student had ever pointed out the problem with CREATE DOES>
words only associating *one* action to the data type. I was well aware
of this problem way back in 1985 when I was a teenager programming on
my Commodore-64. I find it extremely difficult to believe that *none*
of your students ever noticed this obvious problem. I think that it is
much more likely that *all* of your students noticed this problem, but
they were just playing dumb in your class in order to graduate.
Employees don't point out problems to the boss! Playing dumb is an old
tradition among employees. The boss will say something that makes no
sense, and the employees will just bob their heads up and down and
declare: "Yes, yes, you are so right..." This is nothing to be proud
of; it is just the way that people keep their jobs. I don't work for
Forth Inc. however, so I feel free to point out problems with
SwiftForth as I have done in the disassembly above.

If I were to attend your novice class and *not* get thrown out, then
this would imply that I am a novice Forth programmer. I would never
allow myself to graduate from your novice class because of the
implication that doing so makes me a novice. The entire C community
believes that *every* Forth programmer is a novice, and that we will
*never* be anything except novices. I'm not going to admit to being a
novice myself though. They can say what they will, but I'm not going
to denounce myself.

Andrew Haley

Jun 9, 2010, 4:50:37 AM
to
Hugh Aguilar <hughag...@yahoo.com> wrote:

> With a CREATE DOES> word, the user just has to remember what order
> the constant data were comma'd in and extract the data
> correctly. This is a common source of bugs (for me) --- using the
> wrong offsets to get the data out of the body.

That's what structures are for.

> This is the way that a typical graduate of your novice class would
> write a definer for a 3-dimensional array:
>
> : slow-3array ( dim1 dim2 dim3 siz1 -- )
> create
> dup ,
> >r rot r> * dup ,
> rot * dup ,
> * allot
> does> \ x1 x2 x3 base-adr -- element-adr
> >r
> r@ 2 cells + @ * swap
> r@ 1 cells + @ * + swap
> r@ @ * + r> 3 cells + + ;

That's pretty horrible code.

> You once told me that, in all of your years of teaching the novice
> class, no student had ever pointed out the problem with CREATE DOES>
> words only associating *one* action to the data type.

I should certainly hope not: it's a fairly simple exercise to write a
DOES> word that has more than one runtime action. It's certainly
something that a reasonably bright programmer should be able to do at
the end of a one-week introductory course.

Andrew.

Howerd

Jun 9, 2010, 5:28:19 AM
to
Hi Hugh,

What Forth system are you using to compile 3array ?
I couldn't get it to work on either SwiftForth or Win32Forth (V6.14).

At a glance it looks like slow-3array doesn't work - I think the
size field is comma'd in first but accessed last.

Do you have a test function to make sure that these words work?

I am curious to know why your version is faster (if it is - I always
thought that CREATE ... DOES> had a very small overhead) - but it's
difficult to make comparisons without being able to compile the
code ;)

Best Regards

Howerd


On 8 June, 22:30, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> On Jun 8, 12:54 am, Elizabeth D Rather <erat...@forth.com> wrote:

[snip]

Andrew Haley

Jun 9, 2010, 5:45:36 AM
to
Howerd <how...@yahoo.co.uk> wrote:
> Hi Hugh,
>
> What Forth system are you using to compile 3array ?
> I couldn't get it to work on either SwiftForth or Win32Forth (V6.14).
>
> At a glance it looks like slow-3array doesn't work - I think the
> size field is comma'd in first but accessed last.
>
> Do you have a test function to make sure that these words work?
>
> I am curious to know why your version is faster (if it is - I always
> thought that CREATE ... DOES> had a very small overhead) - but it's
> difficult to make comparisons without being able to compile the
> code ;)

This one has come up before; I don't know if you've seen it. See my
response at

http://groups.google.co.uk/group/comp.lang.forth/msg/62f36fc148b22eb1?hl=en

Andrew.

Stephen Pelc

Jun 9, 2010, 7:09:07 AM
to
On Tue, 8 Jun 2010 14:30:00 -0700 (PDT), Hugh Aguilar
<hughag...@yahoo.com> wrote:

>One problem with CREATE DOES> is that you have *two* CALL instructions
>(contrary to your statement above that there is only *one* CALL
>instruction). You have one CALL into the defined word. It does another
>CALL into the DOES> code. This CALL puts the body address on the
>return stack. The DOES> code does an R> to move that address to the
>parameter stack and then falls into the code that that user wrote.
>Having a CALL that goes directly into another CALL kills the speed.
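The indirection described in the quoted paragraph can be modeled in Python, with a closure standing in for the CALL that pushes the body address (a sketch only; all names here are made up):

```python
# Toy model of CREATE DOES>: one shared run-time action per defining
# word, with each child supplying only its own body. The closure plays
# the role of the second CALL that pushes the body address.
def does(action):
    def definer(body):
        return lambda *args: action(body, *args)
    return definer

# a CONSTANT-like child: its run-time action just fetches from the body
constant = does(lambda body: body["value"])
five = constant({"value": 5})
```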

You have been told before that this is an issue with the compiler. It is
not intrinsic to CREATE ... DOES>. See the following transcript from
VFX Forth.

VFX Forth for Windows IA32
© MicroProcessor Engineering Ltd, 1998-2010

Version: 4.41 [build 3094]
Build date: 6 April 2010

Free dictionary = 7591214 bytes [7413kb]


: slow-3array ( dim1 dim2 dim3 siz1 -- )
create
dup ,
>r rot r> * dup ,
rot * dup ,
* allot
does> \ x1 x2 x3 base-adr -- element-adr
>r
r@ 2 cells + @ * swap
r@ 1 cells + @ * + swap

r@ @ * + r> 3 cells + + ; ok
ok
ok


5 7 9 cell slow-3array test ok

ok
dis test
TEST is a child of SLOW-3ARRAY
Data area contains
004C:0BB0 04 00 00 00 14 00 00 00 8C 00 00 00 00 00 00 00
...............

SLOW-3ARRAY
( 004C0B00 E8DB16F8FF ) CALL 004421E0 CREATE
( 004C0B05 8B1524214000 ) MOV EDX, [00402124]
( 004C0B0B 891A ) MOV 0 [EDX], EBX
( 004C0B0D 83C204 ) ADD EDX, 04
( 004C0B10 891524214000 ) MOV [00402124], EDX
( 004C0B16 53 ) PUSH EBX
( 004C0B17 5B ) POP EBX
( 004C0B18 0FAF5D08 ) IMUL EBX, [EBP+08]
( 004C0B1C 8B1524214000 ) MOV EDX, [00402124]
( 004C0B22 891A ) MOV 0 [EDX], EBX
( 004C0B24 83C204 ) ADD EDX, 04
( 004C0B27 891524214000 ) MOV [00402124], EDX
( 004C0B2D 0FAF5D04 ) IMUL EBX, [EBP+04]
( 004C0B31 8B1524214000 ) MOV EDX, [00402124]
( 004C0B37 891A ) MOV 0 [EDX], EBX
( 004C0B39 83C204 ) ADD EDX, 04
( 004C0B3C 891524214000 ) MOV [00402124], EDX
( 004C0B42 0FAF5D00 ) IMUL EBX, [EBP]
( 004C0B46 011D24214000 ) ADD [00402124], EBX
( 004C0B4C 8B5D0C ) MOV EBX, [EBP+0C]
( 004C0B4F 8D6D10 ) LEA EBP, [EBP+10]
( 004C0B52 E8E93DF8FFF20A4C00 ) CALL 00444940
DOES_SIN 004C0AF2
( 004C0B5B E8F016F8FF ) CALL 00442250
NO-COMPILER
( 004C0B60 E88FEFF4FF ) CALL 0040FAF4
(;CODE)
( 004C0B65 8B1424 ) MOV EDX, [ESP]
( 004C0B68 0FAF5A08 ) IMUL EBX, [EDX+08]
( 004C0B6C 8B1424 ) MOV EDX, [ESP]
( 004C0B6F 8B4A04 ) MOV ECX, [EDX+04]
( 004C0B72 0FAF4D00 ) IMUL ECX, [EBP]
( 004C0B76 03D9 ) ADD EBX, ECX
( 004C0B78 8B1424 ) MOV EDX, [ESP]
( 004C0B7B 8B0A ) MOV ECX, 0 [EDX]
( 004C0B7D 0FAF4D04 ) IMUL ECX, [EBP+04]
( 004C0B81 03D9 ) ADD EBX, ECX
( 004C0B83 5A ) POP EDX
( 004C0B84 83C20C ) ADD EDX, 0C
( 004C0B87 03DA ) ADD EBX, EDX
( 004C0B89 8D6D08 ) LEA EBP, [EBP+08]
( 004C0B8C C3 ) NEXT,
( 141 bytes, 39 instructions )
ok
1 2 3 test ok-1
.s
DATA STACK
top
4984204 004C:0D8C
ok-1
. 4984204 ok
ok
ok
: t 1 2 3 test ; ok
dis t
T
( 004C10C0 68B00B4C00 ) PUSH 004C0BB0
( 004C10C5 6B15B80B4C0003 ) IMUL EDX, [004C0BB8], # 03
( 004C10CC 8B0C24 ) MOV ECX, [ESP]
( 004C10CF 6B410402 ) IMUL EAX, [ECX+04], # 02
( 004C10D3 03C2 ) ADD EAX, EDX
( 004C10D5 8B1424 ) MOV EDX, [ESP]
( 004C10D8 0302 ) ADD EAX, 0 [EDX]
( 004C10DA 5A ) POP EDX
( 004C10DB 83C20C ) ADD EDX, 0C
( 004C10DE 03D0 ) ADD EDX, EAX
( 004C10E0 8D6DFC ) LEA EBP, [EBP+-04]
( 004C10E3 895D00 ) MOV [EBP], EBX
( 004C10E6 8BDA ) MOV EBX, EDX
( 004C10E8 C3 ) NEXT,
( 41 bytes, 14 instructions )
ok
: t2 test ; ok
dis t2
T2
( 004C1110 68B00B4C00 ) PUSH 004C0BB0
( 004C1115 0FAF1DB80B4C00 ) IMUL EBX, [004C0BB8]
( 004C111C 8B1424 ) MOV EDX, [ESP]
( 004C111F 8B4A04 ) MOV ECX, [EDX+04]
( 004C1122 0FAF4D00 ) IMUL ECX, [EBP]
( 004C1126 03D9 ) ADD EBX, ECX
( 004C1128 8B1424 ) MOV EDX, [ESP]
( 004C112B 8B0A ) MOV ECX, 0 [EDX]
( 004C112D 0FAF4D04 ) IMUL ECX, [EBP+04]
( 004C1131 03D9 ) ADD EBX, ECX
( 004C1133 5A ) POP EDX
( 004C1134 83C20C ) ADD EDX, 0C
( 004C1137 03DA ) ADD EBX, EDX
( 004C1139 8D6D08 ) LEA EBP, [EBP+08]
( 004C113C C3 ) NEXT,
( 45 bytes, 15 instructions )
ok

Stephen


--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

Howerd

Jun 9, 2010, 9:10:36 AM
to
Hi Andrew,

I'm not concerned (at the moment) about the advisability of using
evaluate, or even locals, or Hugh's approach to discussing software -
just that Hugh has presented two pieces of Forth code one of which
looks incorrect and the other of which I can't compile...

What I did notice in Hugh's 3array is that there is some compile time
calculation before values are comma'd in which seems to make the run-
time code faster. Nothing to do with create...does though.

Best Regards

Howerd

On 9 June, 10:45, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> http://groups.google.co.uk/group/comp.lang.forth/msg/62f36fc148b22eb1...
>
> Andrew.

John Passaniti

Jun 9, 2010, 11:35:14 AM
to
On Jun 9, 4:50 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> That's pretty horrible code.

The n-dimensional array code you posted in the past was much nicer,
clearer, and as if it really mattered, faster. It's certainly clearer
than the macro-like method (and the necessary supporting code) he
offers in his "novice" package. Hugh has stated that his "novice"
package would be used as a library, without the user really having to
understand what is going on. Well, okay, but your solution is smaller
and simpler and *can* be understood by a novice!

What I find especially funny about Hugh's singular fascination with
speed is that he measures the speed of access to a n-dimensional array
assuming that you always want to pass all n-dimensions when indexing
it. And while I'm sure there are applications where one needs to
randomly hop around inside an array, in my experience, that rarely
happens. Far more often, you're indexing only a single slice of the
array at a time, and often you're moving sequentially through that
slice. So once you've got a base address, it's both simpler and
faster to avoid indexing the array and instead just add displacements
to some base address.

Or put another way, Hugh's code may indeed be faster if what you're
measuring is the speed of a single access of an array element using
all n-dimensions. But Hugh's code will be *far* slower when one
considers the normal and trivial optimizations one makes when
implementing real-world algorithms.
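The access pattern described above - index once, then step by a constant displacement - can be sketched in Python (illustrative only; the names and the byte-offset model are mine, not from anyone's package):

```python
# Index the slice once, then advance by the element size, instead of
# re-running the full three-subscript computation for every element.
def walk_row(base, siz1, siz2, siz3, dim1, x2, x3):
    addr = base + x2 * siz2 + x3 * siz3  # one full index for the slice
    for _ in range(dim1):
        yield addr
        addr += siz1                     # constant displacement per step
```

For a 5x7x9 array of 4-byte cells (siz2 = 20, siz3 = 140), walking the row at x2 = 1, x3 = 0 visits offsets 20, 24, 28, 32, 36.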

Hugh may indeed be able to show that for a particular compiler, his
method is faster. But that's a lot like saying that if I took

> I should certainly hope not: it's a fairly simple exercise to write a
> DOES> word that has more than one runtime action.  It's certainly
> something that a reasonably bright programmer should be able to do at
> the end of a one-week introductory course.

Hugh has arbitrarily decided that any dispatching mechanism isn't
"Forth-like" and so any of the canonical solutions (such as a class-
oriented pointer to a table of xt's or a prototype-oriented copy of
that table) aren't considered. Even though a basic dispatcher needs
to be little more than:

... does> swap cells + @ execute ;

Personally, instead of worrying if something is "Forth-like" which is
inherently subjective, I think a better metric to judge code is if it
does what it needs to do in a clear and efficient manner. But I'm
crazy like that.
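For concreteness, that one-line dispatcher can be modeled in Python (a sketch; `make_dispatching_word` is a made-up name):

```python
# Model of "does> swap cells + @ execute": the body holds a table of
# xt's, and a selector on the stack picks which action runs.
def make_dispatching_word(actions):
    def child(selector, *args):
        return actions[selector](*args)
    return child

# two run-time actions associated with one data type
point = make_dispatching_word([lambda p: p["x"], lambda p: p["y"]])
```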

Marcel Hendrix

Jun 9, 2010, 2:50:28 PM
to
steph...@mpeforth.com (Stephen Pelc) wrote Re: I & J
[..]

> : slow-3array ( dim1 dim2 dim3 siz1 -- )
> create
> dup ,
> >r rot r> * dup ,
> rot * dup ,
> * allot
> does> \ x1 x2 x3 base-adr -- element-adr
> >r
> r@ 2 cells + @ * swap
> r@ 1 cells + @ * + swap
> r@ @ * + r> 3 cells + + ; ok
[..]

iForth version 4.0.271, generated 13:30:07, May 23, 2010.
x86_64 binary, native floating-point, extended precision.
Copyright 1996 - 2010 Marcel Hendrix.
[..]


> 5 7 9 cell slow-3array test ok
ok

FORTH> : t 1 2 3 test ;
Redefining t ok
FORTH> see t
Flags: TOKENIZE, ANSI


: t 1 2 3 test ; ok

FORTH> : t 1 2 3 test @ . ;
Redefining t ok
FORTH> see t
Flags: ANSI
$01243080 : t
$0124308A mov rbx, $012421E0 qword-offset
$01243091 lea rbx, [rbx*2 0 +] qword
$01243099 add rbx, $012421E0 qword-offset
$012430A0 mov rdi, $012421D8 qword-offset
$012430A7 lea rbx, [rbx rdi*2] qword
$012430AB add rbx, $012421D0 qword-offset
$012430B2 push [rbx $012421E8 +] qword
$012430B8 jmp .+10 ( $011391E2 ) offset NEAR

Why multiply?

-marcel

Message has been deleted

Hugh Aguilar

Jun 9, 2010, 3:24:49 PM
to
On Jun 9, 2:50 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> > With a CREATE DOES> word, the user just has to remember what order
> > the constant data were comma'd in and extract the data
> > correctly. This is a common source of bugs (for me) --- using the
> > wrong offsets to get the data out of the body.
>
> That's what structures are for.

If your FIELD word is written in terms of CREATE DOES>, then that is
another two CALL instructions.

I agree that the SLOW-3ARRAY I wrote was "pretty horrible," but I
wrote it so that it would be fast. It still came out to 45
instructions, which is over twice what mine was. If I had written it
to use a structure to define the offsets, it would have likely been
over 100 instructions. Then everybody would complain that I had
purposely made it inefficient.

> I should certainly hope not: it's a fairly simple exercise to write a
> DOES> word that has more than one runtime action.  It's certainly
> something that a reasonably bright programmer should be able to do at
> then end of a one week introductory course.

Yes, but this is going to involve *run-time* dispatching. The
execution speed is going to be abysmal.


Hugh Aguilar

Jun 9, 2010, 3:34:26 PM
to
On Jun 9, 3:45 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Howerd <howe...@yahoo.co.uk> wrote:
> > Hi Hugh,
>
> > What Forth system are you  using to compile 3array ?
> > I couldn't get it to work on either SwiftForth or Win32Forth (V6.14).

You have to include the whole novice.4th file, as 3ARRAY uses
functions previously defined in it.

> > At a glance it looks like slow-3array doesn't work - I think the the
> > size field is comma'd in first but accessed last.

It works.

> This one has come up before; I don't know if you've seen it.  See my
> response at
>

> http://groups.google.co.uk/group/comp.lang.forth/msg/62f36fc148b22eb1...

I've seen this code before and I was very impressed.

It has the same number of words executed at run-time, so it should run
at the same speed, assuming that multiplication is fast. On a
processor without hardware multiply, it should run faster. The
downside of your implementation is that it uses a *lot* of memory,
with all those arrays of pointers. On a desktop computer, this isn't a
problem (that is why this algorithm is used in Java), but on a micro-
controller that much memory usage wouldn't be acceptable.

I might put this code in my novice package as an alternative to my
array words such as 3ARRAY.

BTW, there is a typo in my 2ARRAY that causes a bug. This will be
fixed in my upgrade to the novice package later today. This is the
only bug I have found in any of that code.

Hugh Aguilar

Jun 9, 2010, 3:36:25 PM
to
On Jun 9, 7:10 am, Howerd <howe...@yahoo.co.uk> wrote:
> What I did notice in Hugh's 3array is that there is some compile time
> calculation before values are comma'd in which seems to make the run-
> time code faster. Nothing to do with create...does though.

I did the same compile-time calculation in both 3ARRAY and
SLOW-3ARRAY.

Hugh Aguilar

Jun 9, 2010, 3:46:39 PM
to
On Jun 9, 5:09 am, stephen...@mpeforth.com (Stephen Pelc) wrote:
> You have been told before that this is an issue with the compiler. It is
> not intrinsic to CREATE ... DOES>. See the following transcript from
> VFX Forth.

It *is* an intrinsic problem with CREATE DOES>. The compiler has no
way of knowing if those data comma'd in after the CREATE are constants
(they usually are) or are volatile (as is the case in DEFER when IS
can change that data). To be safe, the compiler has to fetch those
values out of memory at run-time, because it doesn't know if the
values have been changed or not from the time when they were comma'd
in at compile-time.

Some compilers (yours, afaik) have some kind of non-standard
declarations that tip off the compiler to the fact that the comma'd in
data are non-volatile and can be compiled as constant literals. This
is just a work-around to the problem with ANS-Forth's CREATE DOES>
words. My code is a work-around too, but at least my code is ANS-Forth
standard.

I think that my code will be faster than comparable CREATE DOES> code
on *every* compiler. The difference in speed may be less pronounced on
a compiler that does optimization, as compared to SwiftForth that does
very little optimization. There will always be a difference in speed
though.

Besides the speed issue, my code has the advantage of generating
several helper colon words. With CREATE DOES>, helper words would have
to be written as colon words that access the data using ' and >BODY
--- that is pretty messy.

Hugh Aguilar

Jun 9, 2010, 3:53:41 PM
to
On Jun 9, 12:50 pm, m...@iae.nl (Marcel Hendrix) wrote:
> stephen...@mpeforth.com (Stephen Pelc) wrote Re: I & J

My LIT*, doesn't multiply, but uses a shift, when the dimension is a
power of two. This is why I say in my documentation that the
dimensions should be made powers of two on a micro-controller without
a hardware multiply for a speed boost, assuming that the extra memory
usage is acceptable.
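A Python sketch of that strength reduction (a guess at the intent of LIT*, based only on the description above; the name is mine):

```python
# Compile a shift instead of a multiply when the literal dimension is
# a power of two; otherwise fall back to a plain multiply.
def lit_times(n, x):
    if n > 0 and n & (n - 1) == 0:        # n is a power of two
        return x << (n.bit_length() - 1)  # shift by log2(n)
    return x * n
```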

Your code is relying on the dimensions being very small. That is not
going to work every time.


Stephen Pelc

Jun 9, 2010, 3:51:48 PM
to
On Wed, 9 Jun 2010 20:50:28 +0200, m...@iae.nl (Marcel Hendrix) wrote:

>Why multiply?

Oh no, it's compiler war time again!

Marcel Hendrix

Jun 9, 2010, 4:10:40 PM
to
steph...@mpeforth.com (Stephen Pelc) wrote Re: I & J

> On Wed, 9 Jun 2010 20:50:28 +0200, m...@iae.nl (Marcel Hendrix) wrote:

>>Why multiply?

> Oh no, it's compiler war time again!

If a non-standard word like CONST-DATA (didn't Anton promote
FAST-DOES> a while ago?) is allowed:

FORTH> : slow-3array ( dim1 dim2 dim3 siz1 -- )
<3>[FORTH>] create
<3>[FORTH>] HERE >R
<3>[FORTH>] dup ,
<3>[FORTH>] >r rot r> * dup ,
<3>[FORTH>] rot * dup ,
<3>[FORTH>] * allot
<3>[FORTH>] R> 3 CELLS CONST-DATA
<3>[FORTH>] does> \ x1 x2 x3 base-adr -- element-adr
<3>[FORTH>] >r
<3>[FORTH>] r@ 2 cells + @ * swap
<3>[FORTH>] r@ 1 cells + @ * + swap
<3>[FORTH>] r@ @ * + r> 3 cells + + ; ok
FORTH> ok
FORTH> 5 7 9 cell slow-3array test ok
FORTH> : tt 1 2 3 test @ . ; ok
FORTH> see tt
Flags: ANSI
$01243000 : tt
$0124300A push $012425C8 qword-offset
$01243010 jmp .+10 ( $011391E2 ) offset NEAR
$01243015 ;

For those special jobs :-)

-marcel

John Passaniti

Jun 9, 2010, 5:06:25 PM
to
On Jun 9, 3:24 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> Yes, but this is going to involve *run-time* dispatching. The
> execution speed is going to be abysmal.

Really? You've *measured* it in a real-world application? Nah, I
didn't think so.

You keep looking at the speed of things in isolation. You're like the
junior programmer I once mentored who spent a day optimizing a math
routine. He ended up making it something like ten times faster than
the original. I just looked at him, and said, "well, that's great,
but that routine gets called once, during initialization." He ended
up saving a millisecond. I then pointed out to him the wide variety
of routines that profiling showed were called many times and were the
actual bottlenecks in the system. Taking one near the top of the
list, I made a minor optimization on a routine that was called many
times, and instantly, measurement showed I shaved off several hundred
milliseconds. I then asked him what was a more productive use of his
time.

Hugh Aguilar

Jun 9, 2010, 5:54:55 PM
to
On Jun 9, 2:10 pm, m...@iae.nl (Marcel Hendrix) wrote:
> If a non-standard word like CONST-DATA (didn't Anton promote
> FAST-DOES> a while ago?) is allowed:

My novice package was written while wearing the ANS-Forth strait-
jacket.

The point that I'm trying to make here is that ANS-Forth is a screwed
up standard. This is the year 2010 and we are trying to figure out a
way to implement arrays efficiently without breaking the standard. The
same thing is true of the FIELD word, as I pointed out earlier. You
know, there is a reason why C programmers consider Forth programmers
to be perpetual novices!

I would like to see :NAME become a part of the standard. There are
other possibilities, such as CONST-DATA, that would help. I'm pretty
dubious of CONST-DATA because that requires an optimizing compiler to
take advantage of, and we still have the problem of only one action
per data type. With a little bit of thought, though, I think we could
come up with some kind of solution.

The problem is that we have too much of a focus on the past, extending
back to the 1970s, and there is no effort being made to fix the
glaring problems in Forth. This reminds me of that IBM370 assembly-
language job that I had. I noticed that pretty much all of the
functions in the library, were written by somebody named Gary L.. This
guy was supposedly *very* smart, and he figured everything out, so all
we had to do was use his library and we were set. Doing anything
differently, was taboo, as none of us were considered to be very smart
(official company policy!). I had to ask: "What became of this very
smart programmer?" The answer was that he had quit the company many
years ago and started his own company. The implication was obvious ---
leaving the company and being very smart were synonymous. lol Of
course, I called the guy up and asked if he was hiring, because I
wanted to be very smart too, but he wasn't interested. I see the same
situation with ANS-Forth. We are just hanging on to all of this old
code that Chuck Moore (very smart!) wrote back in the 1970s. He
himself is just ignoring the ANS-Forth standard though. That really
says a lot about the value of the ANS-Forth standard! Code such as
CREATE DOES> was pretty cool for its era, but it isn't impressing
anybody today. We could really think about moving forward a little
bit.

When I pushed the idea of a double-precision division for my continued
fraction program, E.R. told me that, if Chuck Moore were writing the
program, he would figure out a way to do it using mixed-precision
arithmetic. Well, that would be just great, if Chuck Moore was at my
beck and call, but unfortunately he isn't. I actually think mixed-
precision arithmetic is a wonderful idea, and it is generally more
efficient than double-precision (compared to C where people just make
their data LONG), but I don't consider this to be a good reason to
*ban* double-precision arithmetic. BTW, I did get my continued
fraction program to work using mixed-precision arithmetic, and it will
be in the next novice release. I got the code from Nathanial
Grossman's article in Forth Dimensions (Sept/Oct 1984). I have no idea
how the double-precision division works. That kind of math is beyond
me, and I doubt that I'm the only one either. I do know that his code
is hugely inefficient though, because it is written in Forth rather
than assembly-language. For the continued fractions, this inefficiency
is acceptable because the program isn't time critical (it is used for
finding rational approximations to be used with */ and M*/). For a
program that has to be fast though, it would not be acceptable. E.R.'s
idea of banning double-precision division just blows my mind, and I'm
not the only one either. I think most novices who try to learn Forth
run into these mind-blowing problems and they just quit --- they go
back to C or whatever, and they don't give Forth any further
consideration. That is why there are so few Forth programmers in the
world today. The hurdles involved in learning Forth, such as writing
your own division routine, are too high.

John Passaniti

Jun 10, 2010, 1:13:17 AM
to
On Jun 9, 5:54 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> The point that I'm trying to make here is that ANS-Forth is a screwed
> up standard. This is the year 2010 and we are trying to figure out a
> way to implement arrays efficiently without breaking the standard.

Actually, nobody is having a problem implementing efficient arrays.
That's because a programmer who has knowledge of the application's
needs, the underlying platform, and the global optimizer that's
between their ears can beat the pants off your "novice" package. Your
arrays optimize for a particular case-- a BASIC-like notion of arrays
where all subscripts are always provided. But that's artificial and
doesn't relate to the vast majority of actual usage patterns in
real-world programs. It's far more common to step through members of an
array by a constant displacement, and that is *always* going to be
faster than what you provided. Always.

Andrew Haley

Jun 10, 2010, 5:29:38 AM
to
Hugh Aguilar <hughag...@yahoo.com> wrote:
>On Jun 9, 2:50 am, Andrew Haley <andre...@littlepinkcloud.invalid>

> wrote:
>> Hugh Aguilar <hughaguila...@yahoo.com> wrote:
>> > With a CREATE DOES> word, the user just has to remember what order
>> > the constant data were comma'd in and extract the data
>> > correctly. This is a common source of bugs (for me) --- using the
>> > wrong offsets to get the data out of the body.
>>
>> That's what structures are for.
>
> If your FIELD word is written in terms of CREATE DOES>, then that is
> another two CALL instructions.

Perhaps. It depends on the implementation. All that the field words
have to do is add a small constant: how they do it is up to them.
Besides, all this obsession with trivial micro-optimization doesn't
help that much in practice.

>> I should certainly hope not: it's a fairly simple exercise to write a
>> DOES> word that has more than one runtime action. It's certainly
>> something that a reasonably bright programmer should be able to do at
>> then end of a one week introductory course.
>
> Yes, but this is going to involve *run-time* dispatching. The
> execution speed is going to be abysmal.

I doubt it. It may be a bit slower, but most of the time that doesn't
matter. In the few cases where the odd nanosecond does matter, it
makes sense to go back and worry about micro-optimization, but if
every line of code is written with only speed in mind the application
will be a pig to maintain. It makes much more sense to write the code
as clearly and simply as possible. Most of the time it'll be plenty
fast enough: the design and algorithms used have a far greater effect.

Andrew.

Andrew Haley

Jun 10, 2010, 5:35:03 AM
to
Hugh Aguilar <hughag...@yahoo.com> wrote:

> It has the same number of words executed at run-time, so it should run
> at the same speed, assuming that multiplication is fast. On a
> processor without hardware multiply, it should run faster. The
> downside of your implementation is that it uses a *lot* of memory,
> with all those arrays of pointers.

No it doesn't, if you look carefully. For example, in a 10 x 10 array,
the overhead is 10%. In a 10 x 10 x 10 array, the overhead is 11%. Etc.
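Those figures check out. Here is a quick Python model (assuming one cell per pointer and one cell per element, which is my assumption, not Andrew's):

```python
# Pointer cells vs. data cells for a row-pointer layout: each level
# except the last holds one pointer per row defined so far.
def pointer_overhead(dims):
    ptrs, rows = 0, 1
    for d in dims[:-1]:
        rows *= d     # rows addressed at this level
        ptrs += rows  # one pointer per row
    return ptrs / (rows * dims[-1])
```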

Again, the thing that really matters is clarity and simplicity.

Andrew.

Albert van der Horst

Jun 10, 2010, 6:45:43 AM
to
In article <5308e9a7-9701-40af...@y11g2000yqm.googlegroups.com>,
Hugh Aguilar <hughag...@yahoo.com> wrote:
>On Jun 9, 2:50 am, Andrew Haley <andre...@littlepinkcloud.invalid>

>wrote:
>> Hugh Aguilar <hughaguila...@yahoo.com> wrote:
>> > With a CREATE DOES> word, the user just has to remember what order
>> > the constant data were comma'd in and extract the data
>> > correctly. This is a common source of bugs (for me) --- using the
>> > wrong offsets to get the data out of the body.
>>
>> That's what structures are for.
>
>If your FIELD word is written in terms of CREATE DOES>, then that is
>another two CALL instructions.

I guess that pretty much nails it down. TWO CALL INSTRUCTIONS!
That must be the end of it.

Groetjes Albert

Oops, I forgot something:
:-)

--
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Hugh Aguilar

Jun 10, 2010, 2:15:51 PM
to
On Jun 10, 4:45 am, Albert van der Horst <alb...@spenarnc.xs4all.nl>
wrote:
> In article <5308e9a7-9701-40af-909a-e5ddf640a...@y11g2000yqm.googlegroups.com>,

> Hugh Aguilar  <hughaguila...@yahoo.com> wrote:
> >If your FIELD word is written in terms of CREATE DOES>, then that is
> >another two CALL instructions.
>
> I guess that pretty much nails it down. TWO CALL INSTRUCTIONS!
> That must be the end of it.

It is not just two CALL instructions for the entire word, it is two
CALL instructions for *each* field access in the word. For the 3D
array, there are three accesses to fields, so that would be six CALL
instructions. All of those CALL instructions really add up in regard
to time.

I use structures a lot. They are extremely useful in making Forth code
readable. Functions such as SLOW-3ARRAY, in which I purposely avoided
the use of a structure in order to make it fast, can be horribly ugly.
When I am programming in a more normal manner, without undue concern
about speed, I use structures quite a lot, and I also use my linked-
list package. This is why I put some effort into making FIELD
efficient. If you look at my novice package you will see that FIELD
generates an immediate word, and that this immediate word in turn
generates the field access code inline. My field accesses don't have
*any* CALL instructions.

I am well aware of the importance of focusing primarily on getting a
program to work, and ignoring the speed issue until it becomes
important. If I wasn't aware of such elementary programming I wouldn't
be able to write big programs. I wrote that slide-rule program in
slightly over a month, which is pretty fast programming considering
what it does.

Using this FIELD, which is typical of how a graduate of E.R.'s novice
class writes Forth software, the slide-rule program takes 21 seconds
to run:

: field ( offset size -- new-offset )
    create
        over ,  +
    does> ( struct-adr base-adr -- field-adr )
        @ + ;

Using this FIELD, which I made available in the novice package, the
program takes 9 seconds to run:

: <field> ( offset -- )   \ run-time: struct-adr -- field-adr
    ?dup if  >r
        :  state@,  if,
            r@ lit,  postpone lit+,  else,  r@ lit+,  then,  ;,
        immediate
        r> drop
    else
        :  ;,  immediate  \ the first field does nothing whatsoever
    then ;

: field ( offset size -- new-offset )           \ stream: name
    over <field>  + ;

Note that you have to include the novice.4th package to get this,
because it uses words such as STATE@, and LIT, that are in the novice
package.

John Passaniti

Jun 11, 2010, 2:44:52 PM
On Jun 10, 2:15 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> Using this FIELD, which is typical of how a graduate of E.R.'s novice
> class writes Forth software, the slide-rule program takes 21 seconds
> to run:
>
> : field ( offset size -- new-offset )
>     create
>         over ,  +
>     does> ( struct-adr base-adr -- field-adr )
>         @ + ;

Simple. Small. Understandable. Easy to tell at a glance what it
does and that it is correct. This is the essence of good Forth code.
This is exactly the kind of code that should be taught in a class for
those learning Forth. This is the kind of code that a "novice" feels
empowered by because the mechanism used to define fields is there,
clearly exposed.

> Using this FIELD, which I made available in the novice package, the
> program takes 9 seconds to run:
>
> : <field> ( offset -- )   \ run-time: struct-adr -- field-adr
>     ?dup if  >r
>         :  state@,  if,
>             r@ lit,  postpone lit+,  else,  r@ lit+,  then,  ;,
>         immediate
>         r> drop
>     else
>         :  ;,  immediate  \ the first field does nothing whatsoever
>     then ;
>
> : field ( offset size -- new-offset )           \ stream: name
>     over <field>  + ;

Complicated. Larger. Not understandable without knowing your private
definitions. Takes even those with lots of Forth experience much more
than a glance to know what it does and if it is correct. This is code
that someone who unnecessarily stresses speed over clarity would
write.

This is awful code to give someone new to the language, and even more
awful is your attitude that a "novice" doesn't have to understand the
code to use it. If we were talking about some complicated algorithm
or a large protocol, I can certainly see offering a library and saying
"you can use this without understanding it." But when we're talking
about something like arrays and structures-- basic, fundamental kinds
of things-- then your "you don't need to understand this" attitude is
toxic. I would much rather give someone new to Forth the tools to
understand how to construct such things themselves than giving them a
black box that doesn't give them anything more than a little bit more
speed. Do you really think someone you call a "novice" is going to
need that speed?

I take it you don't believe in the old adage "give a man a fish and
he'll eat for a day, teach a man to fish, and he'll eat for a
lifetime." Instead of empowering those new to the language, you want
to keep them dependent on your library. Sad.

Here's a better idea. Instead of calling your package "novice" how
about calling it "speed" and promote it as a way for advanced Forth
programmers to optimize their code.

What a waste of time and effort-- you saved 12 seconds of runtime off
a program where speed doesn't matter-- because you run it once to
generate data to drive a CNC machine. And chances are very good that
instead of over-optimizing this code, you could have better invested
your time thinking of different, more efficient algorithms that would
have either reduced the number of calls to "field" or eliminated their
need entirely.

> Note that you have to include the novice.4th package to get this,
> because it uses words such as STATE@, and LIT, that are in the novice
> package.

No thank you. I prefer to spend my time writing code that is correct,
clear, and elegant. If I later determine it needs to be faster, I'll
profile the code to identify hotspots, and then address what I find.
That's a much smarter use of time and effort.

Hugh Aguilar

Jun 11, 2010, 3:40:28 PM
On Jun 10, 3:29 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Perhaps.  It depends on the implementation.  All that the field words
> have to do is add a small constant: how they do it is up to them.
> Besides, all this obsession with trivial micro-optimization doesn't
> help that much in practice.

By using my version of FIELD, rather than the version written with
CREATE DOES>, the time required for my slide-rule program to run went
from 21 seconds to 9 seconds. We are not talking about the "odd
nanosecond" here --- this is almost three times faster.

Note for mathematicians: anything over two is "almost three." :-)

> I doubt it.  It may be a bit slower, but most of the time that doesn't
> matter.  In the few cases where the odd nanosecond does matter, it
> makes sense to go back and worry about micro-optimization, but if
> every line of code is written with only speed in mind the application
> will be a pig to maintain.  It makes much more sense to write the code
> as clearly and simply as possible.  Most of the time it'll be plenty
> fast enough: the design and algorithms used have a far greater effect.

My slide-rule program is not "a pig to maintain." It is simple and
straight-forward. Most of the complexity, such as FIELD, is buried in
the novice package. That is information hiding.

Also, I don't think it is reasonable to expect me to think up a new
algorithm that doesn't involve using records, and linked-lists of
records. Yeah, that is a brilliant plan, and maybe I'll port the
program to Qbasic at the same time. LOL

Hugh Aguilar

Jun 11, 2010, 3:59:22 PM
On Jun 10, 3:35 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

I actually like your Iliffe-vector arrays, so you needn't be so
defensive. To the best of my recollection, this is the only time that
I actually learned something on c.l.f.

I am putting your code in my novice package with a few enhancements. I
have two defining words:

: <ary> ( size 0 dims... name -- )
: ary ( size 0 dims... -- )

You can define a two dimensional array like this:

W 0 3 5 C" TEST" <ARY>
W 0 3 5 ARY TEST

The size parameter is the size of each element. In the above this is W
(1 CELLS), but the definers also support arrays of floats or arrays of
records, or whatever.

In either case, you get the following colon words defined:

: TEST ( indices... -- element-adr )
: <TEST ( -- base-adr )
: TEST> ( adr index -- element-adr )

TEST is used for accessing array elements with "random access."
Assuming that I and J are indices, you can use this:

I J TEST

This gives you the element address. In the above, I is in the range
[0,3) and J is in the range [0,5). They are in the same order as the
dimensions were.

You can also do this:

<TEST J -> I TEST>

Note that the user no longer uses [] because it is specific to word
arrays. The xxx> word should be used instead, because it knows how big
the elements are.

The three improvements are these:

1.) We now have elements of user-specified size, rather than being
limited to words.

2.) We now have the ability to pass the defining word the name on the
stack, rather than being limited to getting it out of the input
stream.

3.) We now have a defining word (either ARY or <ARY>) that works for
*any* number of dimensions. With your system, you had 6ARRAY, 5ARRAY,
etc., specific to how many dimensions there were.
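[The pointer-table idea behind these definers can be sketched in Python for readers who don't follow the Forth. `make_array` is a hypothetical stand-in for ARY, not code from the novice package:]

```python
# Sketch of an Iliffe-vector array: each dimension is a table of
# references into the next, so element access is repeated table lookup
# with no multiplication.  One definer covers any number of dimensions.
def make_array(dims, element=0):
    if len(dims) == 1:
        return [element] * dims[0]
    return [make_array(dims[1:], element) for _ in range(dims[0])]

test = make_array([3, 5])   # analogous to  W 0 3 5 ARY TEST
test[2][4] = 42             # analogous to  I J TEST  with I=2, J=4
assert test[2][4] == 42
```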

Andrew Haley

Jun 12, 2010, 4:17:15 AM
Hugh Aguilar <hughag...@yahoo.com> wrote:
> On Jun 10, 3:35 am, Andrew Haley <andre...@littlepinkcloud.invalid>

> wrote:
>> Hugh Aguilar <hughaguila...@yahoo.com> wrote:
>> > It has the same number of words executed at run-time, so it should run
>> > at the same speed, assuming that multiplication is fast. On a
>> > processor without hardware multiply, it should run faster. The
>> > downside of your implementation is that it uses a *lot* of memory,
>> > with all those arrays of pointers.
>>
>> No it doesn't, if you look carefully. For example, in a 10 x 10 array,
>> the overhead is 10%. In a 10 x 10 x 10 array, the overhead is 11%. Etc.
>>
>> Again, the thing that really matters is clarity and simplicity.
>
> I actually like your Iliffe-vector arrays, so you needn't be so
> defensive.

I'm not being defensive; I'm making a correction.

> To the best of my recollection, this is the only time that
> I actually learned something on c.l.f..
>
> I am putting your code in my novice package with a few enhancements.

Oh dear. Well, once I push some code out on c.l.f. people can do
anything they want.

> The three improvements are these:
>
> 1.) We now have elements of user-specified size, rather than being
> limited to words.
>
> 2.) We now have the ability to pass the defining word the name on the
> stack, rather than being limited to getting it out of the input
> stream.
>
> 3.) We now have a defining word (either ARY or <ARY>) that works for
> *any* number of dimensions. With your system, you had 6ARRAY, 5ARRAY,
> etc., specific to how many dimensions there were.

With the possible exception of 1., I think this is a mistake. The
fairly simple short code I posted can be used as a base for people to
define arrays of any kind, for whatever purpose they want.

Again, you're taking a simple problem and turning it into an
overgeneralized solution.

Andrew.

Andrew Haley

Jun 12, 2010, 4:23:11 AM
John Passaniti <john.pa...@gmail.com> wrote:

> This is awful code to give someone new to the language, and even
> more awful is your attitude that a "novice" doesn't have to
> understand the code to use it. If we were talking about some
> complicated algorithm or a large protocol, I can certainly see
> offering a library and saying "you can use this without
> understanding it." But when we're talking about something like
> arrays and structures-- basic, fundamental kinds of things-- then
> your "you don't need to understand this" attitude is toxic. I would
> much rather give someone new to Forth the tools to understand how to
> construct such things themselves than giving them a black box that
> doesn't give them anything more than a little bit more speed. Do
> you really think someone you call a "novice" is going to need that
> speed?
>
> I take it you don't believe in the old adage "give a man a fish and
> he'll eat for a day, teach a man to fish, and he'll eat for a
> lifetime." Instead of empowering those new to the language, you
> want to keep them dependent on your library. Sad.

Mmm, exactly. I think you've really hit the nail on the head with
this reply. Forth is not a "black box" language, and if you try to use
it that way you may well end up with something like C, but worse.

Andrew.

Hugh Aguilar

Jun 12, 2010, 1:40:05 PM
On Jun 12, 2:17 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Again, you're taking a simple problem and turning it into an
> overgeneralized solution.

You're being inconsistent. You said this previously about my arrays:

"It's a classic example of an overgeneralized brute force solution of
the kind much loved by some C programmers."

But then you present code that looks like this:

: 6array ( 0 i j k l m n)
    create  dimensions
    does>
    swap cells + @
    swap cells + @
    swap cells + @
    swap cells + @
    swap cells + @
    swap cells + ;

That looks like brute-force cut-and-paste programming of the kind much
loved by some C programmers. If the user has to write similar
functions such as 5ARRAY, 4ARRAY, and so forth, there is going to be a
lot of cut and paste being done. Let's hope the user can count
carefully, so he gets the correct number of lines of code in each
function!

By comparison, with my system, ARY (or <ARY>) can be used for *any*
number of dimensions. This is bad???

I get the impression that most c.l.f. members just habitually put
everything down. I write a generalized array definer and everybody
puts it down. I write a version of FIELD that more than doubles the
speed of a typical program, and everybody puts it down. Etc., etc..
This makes me wonder exactly what I could write that you wouldn't put
down. This is just the way that the internet is though. An excellent
book on this subject is: "You are Not a Gadget" (Jaron Lanier). I
wrote a book review of it over here:

http://www.forthwiki.com/tiki-view_forum_thread.php?comments_parentId=8&topics_sort_mode=lastPost_desc&forumId=2

Andrew Haley

Jun 12, 2010, 2:47:14 PM
Hugh Aguilar <hughag...@yahoo.com> wrote:
> On Jun 12, 2:17 am, Andrew Haley <andre...@littlepinkcloud.invalid>

> wrote:
>> Again, you're taking a simple problem and turning it into an
>> overgeneralized solution.
>
> You're being inconsistent. You said this previously about my arrays:
>
> "It's a classic example of an overgeneralized brute force solution of
> the kind much loved by some C programmers."
>
> But then you present code that looks like this:
>
> : 6array ( 0 i j k l m n)
> create dimensions
> does>
> swap cells + @
> swap cells + @
> swap cells + @
> swap cells + @
> swap cells + @
> swap cells + ;
>
> That looks like brute-force cut-and-paste programming of the kind much
> loved by some C programmers.

I don't think so.

Let's try to remember how this started. Your claim was

> My arrays are at least twice as fast as any array written using
> CREATE DOES>, using any compiler.

and I needed to make it clear that your claim was nonsense.

It's an example of how you might write a six-dimensional array, if any
application actually needed one. (Which I very much doubt.) It's not
something that I was offering as a general-purpose solution. It's not
something I would expect anyone just to pick up and use.

I did consider

does>
5 0 do swap cells + @ loop
swap cells + ;

but I think that might have been harder to understand.

> If the user has to write similar functions such as 5ARRAY, 4ARRAY,
> and so forth, there is going to be a lot of cut and paste being
> done.

Then, and only then, and only if it's really needed.

At the time I said

>> So, given a couple of very simple array accessor words
>>
>> : [] ( a n - a') cells + ;
>> : -> ( a n - n) cells + @ ;
>>
>> to print an element of a 3d array called bar you'd
>>
>> bar k -> j -> i -> .
>>
>> which is, often, nicer to use than something like
>>
>> j k i bar @
>>
>> because it requires less stack dancing in use. It also gives you a
>> nice syntax to get at a slice of the array, which is something you
>> commonly need to do. For example,
>>
>> bar k -> j ->
>>
>> produces a single slice of the array. In practice this avoids a lot
>> of indexing, because words work on slices (or arrays of slices) rather
>> than the whole array. (Of course, you can create an array word that
>> takes six items on the stack, but you don't have to do it that way.)

Note the important point about slices.
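[The slice point can be illustrated with nested lists; this is a rough Python analogue of the pointer-table scheme, not Andrew's Forth:]

```python
# With pointer-table arrays, supplying only some of the indices yields a
# sub-array (a slice); later words can work on the slice directly instead
# of re-indexing from the top each time.
def make_array(dims, element=0):
    if len(dims) == 1:
        return [element] * dims[0]
    return [make_array(dims[1:], element) for _ in range(dims[0])]

bar = make_array([2, 3, 4])
row = bar[1][2]            # like  bar k -> j ->  : one slice of the array
row[0] = 7                 # operate on the slice alone
assert bar[1][2][0] == 7   # the update is visible via full indexing too
```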

> I get the impression that most c.l.f. members just habitually put
> everything down. I write a generalized array definer and everybody
> puts it down. I write a version of FIELD that more than doubles the
> speed of a typical program, and everybody puts it down. Etc., etc..
>
> This makes me wonder exactly what I could write that you wouldn't put
> down.

Something that any Forth programmer can look at and say "I couldn't
have done it any cleaner and/or simpler than that."

Andrew.

John Passaniti

Jun 12, 2010, 4:59:32 PM
On Jun 12, 1:40 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> But then you present code that looks like this:
>
> : 6array ( 0 i j k l m n)
>    create  dimensions
>    does>
>    swap cells + @
>    swap cells + @
>    swap cells + @
>    swap cells + @
>    swap cells + @
>    swap cells + ;
>
> That looks like brute-force cut-and-paste programming of the kind much
> loved by some C programmers.

Nope. When people talk about "cut and paste programming" they are
talking about taking a sequence of code and repeating it (sometimes
with minor modifications) across multiple definitions. That is both
redundancy and a maintenance nightmare. This example is entirely
different. This example is the simplest and most direct expression of
what the code needs to do. One could optionally factor "swap cells +"
into a word or put the repeated terms in a loop. Both would add
slightly to the overhead and wouldn't necessarily make it any clearer,
so there doesn't seem to be much benefit to doing either.

Indeed, this kind of loop unrolling code is seen in Charles Moore's
own work. In http://www.ultratechnology.com/color4th.html he writes:

Some of the people who don't like Forth might take to this. In
20 blocks of code I have no conditional statements or loops.
Good Forth minimizes the number of conditional statements.
The minimum is zero. There are no good looping constructs in
i21. I can say

: 5X X X X X X ; : 20X 5X 5X 5X 5X ;
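[The same repetition-by-definition trick, transliterated into Python for anyone who doesn't read Forth; `x` is a hypothetical stand-in for whatever the word X does:]

```python
# Moore's 5X / 20X: repeat a word by composing definitions rather than
# by running a loop with a counter.
calls = []
def x():   calls.append(1)
def x5():  x(); x(); x(); x(); x()        # : 5X  X X X X X ;
def x20(): x5(); x5(); x5(); x5()         # : 20X 5X 5X 5X 5X ;

x20()
assert len(calls) == 20   # twenty invocations, no loop construct anywhere
```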

> If the user has to write similar
> functions such as 5ARRAY, 4ARRAY, and so forth, there is going to be a
> lot of cut and paste being done. Let's hope the user can count
> carefully, so he gets the correct number of lines of code in each
> function!

You presumably will make the same argument against Charles Moore's
code, above.

> By comparison, with my system, ARY (or <ARY>) can be used for *any*
> number of dimensions. This is bad???

That question can't be answered without considering the application
that code is in. That's a key point you keep missing in these
discussions. Generality is fine-- when it is needed. Your "novice"
package addresses multiple artificial needs:

1. That novices find the canonical definitions for arrays
incomprehensible and need a black-box library to do it for them.
2. That novices find the canonical definitions for arrays aren't fast
enough for their applications.
3. That novices need to write efficient code for accessing high-
dimension arrays.
4. That the best way to address arrays is to always provide all
indices, each time (versus working with slices).

The actual needs of novices versus the needs you invented for them are
to understand the language, idioms, and mindset behind Forth. Once
that novice graduates to a deeper understanding of the language, they
can use insight and experience to evaluate when the idioms don't apply
to their work and where the mindset isn't beneficial (or even
harmful). Your "novice" package addresses the wrong audience.
Novices don't need what you're providing.

MarkWills

Jun 12, 2010, 6:03:42 PM

>
> Something that any Forth programmer can look at and say "I couldn't
> have done it any cleaner and/or simpler than that."
>
> Andrew.

Wow. Forgive me, but that sets the bar pretty high, don't you think,
Andrew? Especially for us newbies. I must admit, I generally refrain
from posting code, apart from the odd snippet, lest it/I get ripped
for it.

:-(

Mark

Andrew Haley

Jun 12, 2010, 6:54:55 PM
MarkWills <markrob...@yahoo.co.uk> wrote:
>
>> Something that any Forth programmer can look at and say "I couldn't
>> have done it any cleaner and/or simpler than that."
>
> Wow. Forgive me, but that sets the bar pretty high, don't you think,
> Andrew?

Fair point. OK, something that any Forth programmer can look at and
say "I'd be happy to go with that."

> Especially for us newbies. I must admit, I generally refrain from
> posting code, apart from the odd snippet, lest it/I get ripped for
> it.

There's a world of difference between how you treat someone asking
questions and someone making pronouncements. You are bound to get
different responses to these two approaches.

Andrew.

Rod Pemberton

Jun 12, 2010, 7:59:31 PM
"John Passaniti" <john.pa...@gmail.com> wrote in message
news:010e6b4d-9d86-48ea...@k39g2000yqd.googlegroups.com...

>
> I take it you don't believe in the old adage "give a man a fish and
> he'll eat for a day, teach a man to fish, and he'll eat for a
> lifetime."
>

That adage erroneously assumes that the supply of fish is of sufficient
surplus to feed that man, no... to feed all men taught how to fish for
their entire lives. That's a bad assumption.

Let's start with tuna. Japan consumes 75% of the world's tuna: 80% of
Atlantic bluefin tuna and 50% of other tuna. The supply of tuna is clearly
insufficient to feed the world. It barely supplies a single country: Japan.
China consumes 80% of the world's egg production. China consumes 50% of
world's poultry. China has almost no arable land. China has 1.3 billion
people to feed. Do you see a problem? No? What happens when India,
Africa, and Brazil develop economically and live longer? In a short while,
it isn't going to matter much if you teach a man to fish, or raise fowl, or
farm snakes, alligators, or eels... Without China conquering Australia to
use it as farmland, the world's food supply is in dire trouble. Don't you
agree?


Rod Pemberton


Nathan Baker

Jun 12, 2010, 10:15:35 PM

"Rod Pemberton" <do_no...@notreplytome.cmm> wrote in message
news:hv172c$qpf$1...@speranza.aioe.org...

If we don't quickly develop an efficient "hop off of this dang rock" method,
then the population vs. resources equation is certainly going to create some
dire developments.

Interestingly, I was reading 'jonesforth.s' and 'jonesforth.f' this past
dawn. My conclusion is that Forth is a horrible language which is only fit
for seriously "messed-up" folk. :)

Nathan.


Elizabeth D Rather

Jun 13, 2010, 3:20:14 AM
On 6/12/10 8:47 AM, Andrew Haley wrote:
> Hugh Aguilar<hughag...@yahoo.com> wrote:
...

>> I get the impression that most c.l.f. members just habitually put
>> everything down. I write a generalized array definer and everybody
>> puts it down. I write a version of FIELD that more than doubles the
>> speed of a typical program, and everybody puts it down. Etc., etc..
>>
>> This makes me wonder exactly what I could write that you wouldn't put
>> down.
>
> Something that any Forth programmer can look at and say "I couldn't
> have done it any cleaner and/or simpler than that."

Exactly. The point is that, to the extent there is a "Forth way", it's
to solve the real-world problem at hand, rather than writing something
like a "generalized array definer". CREATE DOES> is simple and clean.
If there is a particular application requirement for even greater speed
(which I haven't encountered) that *might* justify a more complex
solution, but on the whole, "good Forth" consists of the simplest,
cleanest solution to the real-world problem you have, rather than trying
to imagine a problem someone else might have someday in a hypothetical
circumstance.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Elizabeth D Rather

Jun 13, 2010, 3:24:12 AM

It really depends on who it's coming from. When we get a message from a
newbie saying, "Hey, this is what I tried, is there a better way?" we're
delighted to offer suggestions. Hugh is setting himself up as such an
expert that he can disparage many years of experience on the board and
post code that he thinks people should use or emulate. When he's coming
from that POV, we expect Andrew's criteria to be met.

ken...@cix.compulink.co.uk

Jun 13, 2010, 4:22:43 AM
In article <pradnTAXH5CvS47R...@supernews.com>,
andr...@littlepinkcloud.invalid (Andrew Haley) wrote:

> It's an example of how you might write a six-dimensional array, if any
> application actually needed one.

Actually a six-dimensional array is useful for orbital mechanics. It
takes six numbers to describe position and the rotations of an object.
There is, by the way, an explanation of this in Anathem by Neal Stephenson.

Ken Young

John Passaniti

Jun 13, 2010, 4:54:40 AM
On Jun 12, 7:59 pm, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:
> [...]
>
> Don't you agree?

Gee, all this time, I thought the "teach a man to fish" adage was a
metaphor about empowering others by education and sharing knowledge.
Now, thanks to you, I learn it is actually a naive comment on the
state of water-based agricultural systems.

MarkWills

Jun 13, 2010, 5:34:46 AM
On 13 June, 00:59, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
wrote:
> "John Passaniti" <john.passan...@gmail.com> wrote in message

Wow! CLF is notorious for drifting off topic on most threads due to
its somewhat conversational nature, but your post just takes the
biscuit Rod! That is just about the most random off-shoot into never-
never land that I have yet to encounter! Or a pedantry master-class.

Perhaps it was very high-brow humour, the type where one really needs
to see one's face in order to understand that irony/sarcasm/humour is
being purveyed. If so, I'm afraid it was lost on me.

Andrew Haley

Jun 13, 2010, 6:01:14 AM
In comp.lang.forth MarkWills <markrob...@yahoo.co.uk> wrote:
> On 13 June, 00:59, "Rod Pemberton" <do_not_h...@notreplytome.cmm>
> wrote:
>> "John Passaniti" <john.passan...@gmail.com> wrote in message
>> news:010e6b4d-9d86-48ea...@k39g2000yqd.googlegroups.com...
>>
>> > I take it you don't believe in the old adage "give a man a fish and
>> > he'll eat for a day, teach a man to fish, and he'll eat for a
>> > lifetime."
>>
>> That adage erroneously assumes that the supply of fish is of sufficient
>> surplus to feed that man, no... to feed all men taught how to fish for
>> their entire lives. That's a bad assumption.
>>
>> Let's start with tuna. [deletia]

>>
>> Rod Pemberton
>
> Wow! CLF is notorious for drifting off topic on most threads due to
> its somewhat conversational nature, but your post just takes the
> biscuit Rod! That is just about the most random off-shoot into never-
> never land that I have yet to encounter! Or a pedantry master-class.

Note also the mischievous cross-posting into alt.lang.asm. Trolling,
perhaps?

Andrew.

Andrew Haley

Jun 13, 2010, 6:19:19 AM

Mmmm, are you sure that's a six dimensional array, not just a six
dimensional number?

Andrew.

ken...@cix.compulink.co.uk

Jun 14, 2010, 4:02:03 AM
In article <vPydnaKbYL86LYnR...@supernews.com>,
andr...@littlepinkcloud.invalid (Andrew Haley) wrote:

> Mmmm, are you sure that's a six dimensional array, not just a six
> dimensional number?

A six dimensional number for a fixed position and one object. But if
the object is moving or there is more than one it makes sense to use an
array. On the other hand no more than one or two dimensions are likely
to be changing at the same time.

Still in most solutions for orbital mechanics it should be possible to
ignore yaw, pitch and roll giving just three dimensions.

Ken Young
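[The distinction at issue in this subthread can be made concrete in Python (illustrative only): a six-component state is a 1-D array of six numbers, while a six-dimensional array needs six subscripts per access.]

```python
# One object's state: six numbers (position plus yaw, pitch, roll) --
# a six-component vector, indexed with a single subscript.
state = [0.0] * 6

# Many moving objects: an N x 6 table, still only two subscripts --
# nothing here calls for a six-dimensional array.
objects = [[0.0] * 6 for _ in range(10)]
objects[3][2] = 1.5        # object 3, component 2 (say, the z coordinate)
assert len(objects) == 10 and all(len(s) == 6 for s in objects)
```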
