Forth is broken by culture?


Helmar

Jul 22, 2010, 4:44:30 PM
Hi all,

I'm a long-time Forth user and, well, also an implementor. Forth is
part of my access to computing technology.

I can say that a "system of Forth" is not broken by design. One can do
nice things with it, and it is free-form with respect to the concepts
you can use with it. But that nice person has to take the step of
understanding the principles of Forth. Very simple, but that is
basically the problem.

Today I see people "programming" computers. But not in the way it was
done before. Forth, in its expressiveness, was bound to words - like
"sentences" or "writing a text". It's not simply "clicking" or
"touching" something. Of course other languages are also bound to
textuality, but at least they do not force you to change something you
learned from your teacher about mathematics. RPN is something new for
most newbies.

Does the kind of minimalism that is fundamental to Forth have any
future? I personally have trouble thinking that way - but well, the
question stands.

I do not think that Forth is broken by bad technology - it's broken by
culture. It's impossible to reach agreement, and it's impossible for
industry to allow idiosyncratic languages. The corset imposed by
languages like C & friends is what the decision makers require.
Everything flexible loses out to a system that declares rules everyone
can follow. This is why in politics we have dictatorship or democracy
and not a system of "real" freedom.

So can we say that Forth is too flexible/powerful/... for this
culture? Maybe Forth was too far ahead of its time?! When is the time
of Forth?

If you ask me, we need to change the strategy for justifying Forth
from implementation details to freedom of implementation. There is no
need to advertise a small memory footprint; that is true of any good
implementation in any language. What we have to advertise is the
freedom of the semantic layers we can add to the language. No concept
of programming is alien to Forth. What other language can say that?

So, while I sadly notice the amount of spam in this list, I will
finish this troll-post now and look forward to some answers. Random
annotations to Forth are welcome.

Regards,
-Helmar

Professor

Jul 22, 2010, 6:14:00 PM
On Jul 22, 3:44 pm, Helmar <hel...@gmail.com> wrote:
> Today I see people "programming" computers.
> Does the kind of minimalism that is fundamental to Forth have any future?
> I do not think that Forth is broken by bad technology - it's broken by
> culture. It's impossible to reach agreement, and it's impossible for
> industry to allow idiosyncratic languages.

I have noticed that software engineers use 'C' for high level
applications, and hardware engineers use forth for low level
applications. I have also noticed that very advanced hardware and
software engineers use 'C' or forth, whichever is more appropriate for
the task at hand.

I don't think anything is broken. There are simply tools in the
toolbox, and we use what's right for the job. And if all I have is a
hammer, everything looks like a nail.

There are multiple dialects of any language - C and forth and
everything else. It's not necessarily a disagreement so much as they
just talk funny over there, according to these ears.

Decision makers that need software engineers for high level apps
choose C because there are a lot of them available; it's easier.

Decision makers that know they are the SAME person that has to control
this new electron microscope choose forth; it's easier. Decision makers
that need to get this hardware diagnosed and working 'now' choose
forth; it's easier.

> Maybe Forth was too far ahead of its time?! When is the time of Forth?

Forth has always been ahead of its time, but maybe that is not the
issue. Perhaps it's more like 'one does not use a micrometer when a
shovel is called for'.

pablo reda

Jul 22, 2010, 11:12:50 PM
Hi Helmar

I think that if Charles Moore had not invented (discovered?) Forth
back when 2KB was a lot of memory, nobody could invent (discover?) it
now.

This simple machinery of words keeps many secrets. No other language
modifies the art of programming like this: when I program in C, the
program does not change my vision of the problem; when I program in
forth, the code changes my point of view - the analysis changes as I
code!! Very strange and powerful.

Another thing: when you finish a program in C, it is very difficult to
evolve. In forth it is very common to integrate many programs or modify
important parts and keep working. I have remade many low-level words
and the whole system worked better!! The notion of a library is not the
same here.

Sure, the market, the economy and capitalism do not worry about that,
and other forces keep programmers in C#++ and Tea-Java, but... I am
sure the next step in computing is a kind of forth. I don't know when -
perhaps many years from now.

In my humble opinion

van...@vsta.org

Jul 22, 2010, 11:59:18 PM
Professor <prof....@gmail.com> wrote:
> I have noticed that software engineers use 'C' for high level
> applications, and hardware engineers use forth for low level
> applications.

C is very much on a decline--it's too expensive to develop in it, too
difficult to reuse code, too hard to make it really secure, and too hard to
debug it when Bad Things happen. As a machine independent assembler, it's
great. As a higher level language, it's pretty weak.

As an assembly language for a stack computer, Forth is very nice. With its
dictionary to tabulate names to locations, it can approach the expressiveness
of C for a far smaller price in system complexity.

But both of them are far behind the power and expressiveness of modern
languages, and that gap is growing. You just will not believe how quickly
you can assemble functionality in some of these languages until you really
live with them. My own startup is running with Python, and I would never,
ever again want to consider developing code this complex on top of a language
which demanded as much manual programmer effort as does C or Forth. Python
runs amazingly fast, and lets me focus on the data structures and algorithms
while it handles all the BS which used to dominate my programming time.

I'll always have a fond spot in my heart for Forth. But the world has
changed a *lot* since Forth was the "sweet spot" for my own programming.

--
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html

Professor

Jul 23, 2010, 12:31:16 AM
On Jul 22, 10:59 pm, van...@vsta.org wrote:

Is there a microcontroller version of Python? If it's been ported to
the ATmega family or the Propeller, I'd love to give it a try. These
are two of the micros I'm using for this project.

Paul Rubin

Jul 23, 2010, 2:07:34 AM
Professor <prof....@gmail.com> writes:
> Is there a microcontroller version of Python? If it's been ported to
> the ATmega family or the Propeller, I'd love to give it a try. These
> are two of the micros I'm using for this project.

It's not really feasible to run Python on that class of processor. It
can be used in larger embedded systems like the ARM controllers found in
media players, digital cameras, mobile phones, etc. A somewhat smaller
language in the same general spirit, called Lua (www.lua.org), has also
gotten traction on those platforms because of its smaller size. Python
works best on desktop or server class hardware. It gains you a lot of
productivity and code reliability, but at some cost in memory and CPU
cycles.

But, why do you want to use an Atmega or Propeller anyway, unless you
have some hardware constraints that require something like that?
Otherwise it's sort of like trying to cross the ocean in a canoe. It
might be an interesting adventure but most people would take the
practical approach of just buying a plane ticket.

Elizabeth D Rather

Jul 23, 2010, 3:25:42 AM
On 7/22/10 8:07 PM, Paul Rubin wrote:
> But, why do you want to use an Atmega or Propeller anyway, unless you
> have some hardware constraints that require something like that?
> Otherwise it's sort of like trying to cross the ocean in a canoe. It
> might be an interesting adventure but most people would take the
> practical approach of just buying a plane ticket.

Two words: Unit costs. If you're making a product that will be
produced in vast quantities, saving even $0.01/unit makes a big
difference in your potential market price and overall product success.
The fact that
8051's sell for $0.05 or less keeps them numerically dominant in many
markets.

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310.999.6784
5959 West Century Blvd. Suite 700
Los Angeles, CA 90045
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Paul Rubin

Jul 23, 2010, 3:44:29 AM
Elizabeth D Rather <era...@forth.com> writes:
>> But, why do you want to use an Atmega or Propeller anyway, unless you
>> have some hardware constraints that require something like that?
> Two words: Unit costs. If you're making a product that will be
> produced in vast quantities,

Sure, that would count as a (legitimate) hardware constraint. For
low-quantity targets, though, it doesn't make much sense. And while the
Professor's requirements weren't clear, they didn't come across as
involving enormous quantities.

Even for an 8-bit target it will often make sense to prototype the code
on a desktop computer (in Python, say), get all the features and
protocols solidified, then re-implement on the target platform in Forth
or C or whatever.

MarkWills

Jul 23, 2010, 7:31:16 AM
Interesting topic.

Broken by culture is probably a good way to express the point you are
making. Though I would say the sentiment could equally apply to other
'fringe' languages.

I think Chuck Moore, and later, Jeff Fox hit the nail on the head:

Forth is an excellent language for individual programmers to use.
Indeed, Forth is simple enough that an entire Forth *system*
(interpreter, compiler, io, etc) can be written and understood by a
single human.

Forth projects work well when they are single man projects. There is
also plenty of evidence to suggest that 'mid-size' Forth projects can
be successful, as evidenced by Forth Inc and MPE.

Trouble is, nobody is interested in that. Big sells. Department
managers and project managers don't want a three-man programming team.
They want a 300 man programming team? Why?

Ego.

When you are responsible for a 300-man programming team, the likes of
which you will find in IBM, EDS, Cap Gemini, ICL et al, you are a big
player in your company. You get large budgets. You get invited to the
board's parties; if your project is a success, you might be invited
onto the CEO's yacht or given a trip in his Learjet. It sounds
ridiculous, but this does happen.

In addition, the larger and more complex something *appears* to be,
the more money you can charge for it. If a client rolls up to your
premises to discuss a bespoke software application, and you are
working from the loft of your house, he is going to laugh at you if
you try to ask him for half a million dollars. If you are working from
custom, plush premises with lots of people walking around holding bits
of paper, he won't bat an eyelid.

A lot of people reading this post will work in, or will have worked in
large companies. Whilst doing so, they will have worked on projects
where they *know* that they could have worked on the entire thing
themselves, and done a better job, in a fraction of the time. I have
indeed just demonstrated that at the place where I work. Some 6 months
ago I wrote a simulator system in 8 weeks, from scratch. The previous
version had been worked on by a team of 8 people for 4 years. My
system works. Theirs works too, but it is buggy and fragile, with a
vile user interface (in fact there is no user interface; you configure
it by hand-writing XML).

You would think that the company would be delighted. But in actual
fact, the opposite has happened. It has pissed a lot of people off.
Pride has been hurt. Vested interests have been damaged. Empires put
at risk. Top management invited me to go and speak to them. They
wanted to know how I could write something better by myself than an
entire team. My answer? *Because* I was by myself. The team
programmers are constrained by documentation, documentation,
documentation, design reviews, intermediate reviews, clarification
phases, changing specifications, and, worst of all, work-front
demarcation. None of the 11 guys on the team has a picture of the
entire project. None of them can debug the others' code. What has
resulted is a code base that is a mess, that will only build on an old
Visual Studio 6 compiler, where no one person knows how the thing
works, to the point that the departure of a single member of staff to
pastures new can represent a serious risk to the company.

However, this is NORMAL. In my experience, most in-house software
projects are very delicate, and work by luck, more than judgment!
Everyone sets out with good intentions, but things slip and slide.
Work hours are over scrutinised, and over managed. For example, if you
suggested to your project manager "We'll do one code iteration, learn
from it, ditch it, and start again" you'd be directing cars in the car
park pretty soon! However, that is the secret to how my software was
better than an entire team's code. I had the benefit of hindsight, and
could see where they had gone wrong. I'm sure if they were allowed to
dump their code, instead of putting filler on top of filler, they'd
come up with something much better. Management won't allow it however,
because to 'start again' is a sign of failure. A shame.

I think the long and the short of it is that large corporations have
to build complexity into their operations, to be a large corporation.
That's *why* they are large corporations! It's self fulfilling. If you
want to be a big player in the market, and on the LSE or NASDAQ, you
better employ 10,000 people to push documents around, or no-one is
interested.

Contentious and cynical, I know. But I'm only describing what I have
observed over the years. My recent trip to China was a real eye
opener. Here, in Europe, we put the cart before the horse. This is due
mostly to ISO2000/2001 rules. They make an entire project document
based and document driven. Documents are king. You will 'design' your
widget with documents, which will be peer reviewed, and client
reviewed, before the build starts. Only then will you realise (because
nobody noticed) that the threads on the bolts were imperial, but the
buyer ordered 10000 metric nuts. No problem. You will write a design
change document. And a method statement document to describe how you
will correct the error. The documents you will write will be reviewed,
comments put in, which you have to action, send for re-review etc in a
game of document tennis. This is of course all good, because it puts
asses on seats, and keeps legions of terminally-bored people in jobs
reading design documentation for something that they will never
actually lay their eyes on!

Meanwhile, the Chinese have built their system, ironed out the
problems, and are busy writing the documents which describe it.
There's no way we can compete with that. We write "this is what we are
going to deliver" type documents. And we ALWAYS miss things, and make
mistakes which incur penalties and schedule slippage. The Chinese give
you a turn-key system with a doc-pack and say "This is what we've
built".

I wish I could find the Chuck Moore quote. He put it much more
eloquently and succinctly than I did!

Regards

jacko

Jul 23, 2010, 10:48:17 AM
A truism of Löb's theorem.

Will the Higgs boson be found, or do electrons have volume? (Currently
mutually incompatible.)

1) Even if no boson is found, there will be another funding round.
2) Electrons have volume, develop a unified theory.

Now option 1 seems like a turnover 'winner'. It also keeps a more
complex non-commutative maths as the entry gate to the echelon.

Digging holes and filling them in again is what welfare-to-work is all
about these days.

Cheers Jacko

van...@vsta.org

Jul 23, 2010, 10:56:43 AM
Paul Rubin <no.e...@nospam.invalid> wrote:
> Sure, that would count as a (legitimate) hardware constraint. For
> low-quantity targets, though, it doesn't make much sense. And while the
> Professor's requirements weren't clear, they didn't come across as
> involving enormous quantities.

And more and more of "embedded" is using processors which could easily run a
Python virtual machine (or some other comparable run-time for a higher level
language--I didn't want to make this about any particular language). My
background is OS, systems, and embedded, and my coworkers have similar
backgrounds. All of us agree that it's been a win to be able to use higher
level languages for an ever-increasing percentage of our programming tasks.

John Passaniti

Jul 23, 2010, 11:23:41 AM
On Jul 22, 4:44 pm, Helmar <hel...@gmail.com> wrote:
> If you ask me, we need to change the strategy for justifying
> Forth from implementation details to freedom of implementation.
> There is no need to advertise a small memory footprint.

I really wish this newsgroup would come to terms with two very simple
ideas:

The first idea is that different kinds of programming demand different
kinds of tools. The programmers creating systems that handle sales
transactions don't have the same needs or interests as programmers
doing console games; the programmers doing high-speed digital signal
processing don't have the same requirements as programmers doing page
layout software. All this should be obvious, but I constantly come
across programmers who collapse wildly different problem domains into
the same thing-- and often whatever domain they're familiar with. You
have people in comp.lang.forth go out of their way to optimize code
for space and/or time, and while that may be vital for one platform or
problem domain, it may be completely irrelevant for others. People
need to learn to identify what is important for a particular platform
or problem domain, and not assume that everyone shares the same needs.

The second idea is that Forth doesn't have to be everywhere to be
successful. Many languages target specific problem domains and they
aren't seen as a failure because of it. If you're doing symbolic
processing, the Lisp family of languages is often the tool of choice,
but you'll probably never see Lisp used in industrial control
applications. Perl is popular for text processing tasks, but you
would never consider it for real-time digital signal processing of
video streams. So what is so terrible about Forth's primary domain
being embedded systems? Why not embrace that and freely admit that
there are other languages that can beat the pants off Forth for
certain classes of problems?

> This is true of any good implementation in any language.
> What we have to advertise is the freedom of the semantic layers we
> can add to the language. No concept of programming is alien to
> Forth. What other language can say that?

Umm, no.

First, those "semantic layers" represent real value to the
programmer. Say I'm using a language that provides a service like
garbage collection. That "semantic layer" allows me to write code in
a dynamic style without the same constraints I would have in a
language that doesn't offer that. Now, that may be completely
inappropriate for a small embedded system, but again, use the right
tool for the job. The question shouldn't be if you can arrive at a
solution with a language without "semantic layers," because you can.
The question should be how efficiently can you do so? How expressive
will the code be? Will your code be deluged by low-level details
which obscure what you're trying to do?

Second, Forth has both feet planted firmly in the imperative
procedural language world. It's true that you can *add* features from
other programming paradigms, but that's true of *any* language. It
might look a bit more pretty in Forth because you have some
flexibility with creating syntax, but in terms of functionality, Forth
doesn't provide anything unique.

Third, part of the motivation for Forth was that Charles Moore wanted
to replace a range of languages with a single language. And in that,
he simultaneously succeeded and failed. He succeeded in creating a
language that gives low-level power and control; it's a language that
can be extended however the programmer wants. But the failure is that
when you do extend Forth, you no longer have Forth; you have a hybrid
of Forth and the syntax and semantics you've added. When done well,
you have a language that has a close relationship to the problem
domain. But it isn't Forth. And because of that, you haven't
eliminated languages as Charles Moore wanted to do-- you've created
another new language.

Here's a real-world illustration from some code I wrote:

filter default bw12 hpf 80

Okay, this language was built on Forth. Tell me what it does. Tell
me the semantics. If you can, you've probably been looking over my
shoulder for the past week. This defines a new filter structure named
"default' that is a 12th order Butterworth high-pass filter with a
frequency of 80Hz. In terms of semantics, it creates a data structure
and adds a precomputed number of DSP cycles to a global counter, and
would warn if the DSP was over-committed.
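
For a flavor of how such a defining word can be put together, here is a
minimal sketch using CREATE and a Forth-2012-style PARSE-NAME. It is not
the actual code behind the example above: the constants, the names
DSP-CYCLES and DSP-BUDGET, and the 10-cycles-per-order cost model are
all invented for illustration.

\ Sketch only -- invented names, not the code from the post.
1 constant hpf    2 constant lpf        \ filter kinds
6 constant bw6   12 constant bw12       \ Butterworth orders

variable dsp-cycles                     \ DSP cycles committed so far
1000 constant dsp-budget                \ assumed per-frame cycle budget

: parsed ( "token" -- n )  parse-name evaluate ;

: filter ( "name order kind freq" -- )
  create
    parsed dup ,                        \ order, e.g. BW12
    parsed ,                            \ kind,  e.g. HPF
    parsed ,                            \ cutoff frequency in Hz
    10 * dsp-cycles +!                  \ toy cost model: 10 cycles per order
    dsp-cycles @ dsp-budget > if
      cr ." warning: DSP over-committed" then ;

\ Usage:  filter default bw12 hpf 80
\ DEFAULT then pushes the address of its { order kind freq } structure.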

That's Forth-- or at least, it's a domain-specific language built on
Forth. But the fact it was built on a Forth doesn't mean that I've
eliminated "semantic layers." Indeed, by creating this language, I've
created new layers that someone reading this code must understand.
This is what I mean when I say that Forth failed in regard to
eliminating language. When you write code in Forth, you're creating a
new language; the fact it's based on Forth is nothing special. I
could have done the same thing in C (and indeed, in the past, did):

Filter* default_filter = createFilter(BW12, HPF, 80);  /* "default" itself is a C keyword */

The fact this is in C doesn't help you much with understanding what
this code does (although it does give you a lot more clues than the
Forth version). All non-trivial programming is creating new
abstractions and unless you understand the syntax and semantics of
those abstractions, you're lost.

John Passaniti

Jul 23, 2010, 11:47:55 AM
On Jul 23, 3:44 am, Paul Rubin <no.em...@nospam.invalid> wrote:
> Even for an 8-bit target it will often make sense to prototype
> the code on a desktop computer (in Python, say), get all the
> features and protocols solidified, then re-implement on the
> target platform in Forth or C or whatever.

Not in my experience. I might certainly prototype *small* parts of
the code in another language and then later translate back to C or
Forth or whatever. An example would be some work I did last year on
reverb algorithms. I modeled and tested the algorithms using
PureData, but the rest of the application-- the control aspect, the
communication protocols, the user interface-- all of that was done
either on the target itself or in a simulation of the target.

Part of the reason why I typically don't prototype the entire system
in a different language and then translate back to a target language
is because most embedded systems require a clear understanding of the
limits of the platform. I'm not going to design and prototype in a
language like Python where I have access to rich libraries and high-
level constructs and then confidently say, "well, it's just a matter
now of translating to the target." Hardware has a lead time. Memory
has a cost. Processor cycles matter. I need to tell the hardware
designers weeks before I get boards that I need a certain processor of
a certain size and certain capability. That means I need a very good
idea of what I'm going to need well before I need it. If I'm
prototyping the whole system in a different language-- especially a
high-level language, I lose that ability (except maybe a gut-feel).

I've been in the situation before where I didn't do this-- where as I
worked through a design, I found I needed a lot more memory in the
system or a faster processor. The results were always the same; more
memory might mean a different package and support hardware. A faster
processor might mean different clock frequencies requiring different
layout and recertification for EMI. A simple change can have ripple
effects that can delay release of a product by weeks or months.
Meanwhile, you've lost time-to-market and you've pissed off
distributors for your product who thought they were going to see
product in the channel earlier.

I work for a company that doesn't make huge quantities of products (a
big product may be a few thousand units per year). But that doesn't
mean I can just drop the biggest, fastest micro in a system.
Sometimes time-to-market matters, but more often saving a buck because
we could use a "right-sized" processor instead of an overkill solution
is what drives profitability.

John Passaniti

Jul 23, 2010, 12:47:06 PM
On Jul 23, 7:31 am, MarkWills <markrobertwi...@yahoo.co.uk> wrote:
> A lot of people reading this post will work in, or will have worked in
> large companies. Whilst doing so, they will have worked on projects
> where they *know* that they could have worked on the entire thing
> themselves, and done a better job, in a fraction of the time.

And sometimes this perception is dead wrong, driven by the
individual's over-confidence and lack of understanding of the real
complexities of the development process.

I have no reason to believe your statement isn't the truth-- for a
particular company, on a particular product or service. The danger I
see is the assumption that just because a development team is large,
it is necessarily a problem -- that it's always about ego and "bigger
is better." Sometimes a large development team is large
because it needs to be. Where I work, there are a *lot* of
disciplines that go into making a product. The pile of electronics on
my desk was created by people with specific expertise in DSP,
numerical analysis, low-level hardware, control theory, communications
protocols, user interfaces, domain-specific experience, and so on.
And that doesn't even address the analog electronics, and the desktop
software that controls it, both of which each rely on people with
specific experience. There are precious few multi-disciplinary people
alive in the world today who could by themselves pull off most of the
products I've worked on in the past.

I've been guilty of under-estimating the inherent complexity of work
in the past, fueled by over-confidence that because I was damn good
at certain classes of programming, I was good at all of it.
Thankfully life has a way of slapping you down and reminding you of
your limits. Such experiences also have a way of making you respect
the expertise and talent of others.

> However, this is NORMAL. In my experience, most in-house software
> projects are very delicate, and work by luck, more than judgment!
> Everyone sets out with good intentions, but things slip and slide.
> Work hours are over scrutinised, and over managed. For example, if you
> suggested to your project manager "We'll do one code iteration, learn
> from it, ditch it, and start again" you'd be directing cars in the car
> park pretty soon!

No, not really. The various "agile" methodologies are rapidly
becoming the norm, with their focus on repeatable formalized testing
and constantly evolving software (instead of "big design up front").

But even when someone is stuck in a company that will out-of-hand
reject the idea "build one to throw away" a smart software developer
knows where to pick their battles and go for an incremental
refactoring approach. I'm facing this now-- I inherited a code base
that on the whole is pretty terrible. And given the choice, I would
scrap it all, learn from what worked, and rewrite it. But I don't
have that option-- not because I have a pointy-haired boss who is
hounding me, but because I have a sense of responsibility. My choices
in how I manage my time have a direct impact on what products
eventually hit the shipping dock. So what I did was to prioritize
those changes, identifying things that must change now and things that
can be addressed later. And that is what directs me as I improve the
code.

> Management won't allow it however,
> because to 'start again' is a sign of failure. A shame.

Sometimes, the failure isn't management, but the software developer's
ability to communicate the true cost. I've been in the situation
before where I made claims about the quality of code and that I wanted
to rewrite major parts from scratch. I went to a manager who heard my
arguments, but rejected them. He didn't do that because he mindlessly
wanted to maintain the status quo. He did that because I failed to
present to him a cost-based argument that he could understand. When I
went back and framed the issue not in terms of my subjective
evaluation of the code, but in terms of support costs and development
time lost spent addressing past sins, he instantly got what I was
saying. And he was able to then justify the time I estimated for the
rewrite.

All too often in this newsgroup, we're presented with a Dilbert-like
stereotype of management. And sure, there are companies that suffer
under that. But my experience suggests the problem is more commonly
one of communication and increasingly when I hear stories about waste
and stupidity of management, I have to ask what role miscommunication
and hubris from the software development staff may play. In companies
where management is indeed only concerned with the bottom line, then
it becomes the responsibility of the software developers to frame
their suggestions in terms of the bottom line. If they don't, you
can't blame management.

> I think the long and the short of it is that large corporations have
> to build complexity into their operations, to be a large corporation.
> That's *why* they are large corporations! It's self fulfilling. If you
> want to be a big player in the market, and on the LSE or NASDAQ, you
> better employ 10,000 people to push documents around, or no-one is
> interested.

Sometimes. And sometimes those documents matter. I know a software
engineer who works for a company that makes instruments for
airplanes-- things like altimeters. He showed me a filing cabinet
that was filled with paperwork and documentation for just one of the
products he worked on. I was shocked at the level of detail and
traceability and I suggested that most of his job seemed to be pushing
paper around. He then asked me if I would prefer to be in an airplane
where such documentation wasn't behind each system and where software
engineers were free to slap together designs ad hoc. I shut up at
that point.

Hugh Aguilar

Jul 23, 2010, 5:35:27 PM
On Jul 23, 5:31 am, MarkWills <markrobertwi...@yahoo.co.uk> wrote:
> Broken by culture is probably a good way to express the point you are
> making. ...

Everything you said in your post was true!

> Work hours are over scrutinised, and over managed. For example, if you
> suggested to your project manager "We'll do one code iteration, learn
> from it, ditch it, and start again" you'd be directing cars in the car
> park pretty soon!

Or become a cab-driver!

Paul E. Bennett

Jul 23, 2010, 6:02:40 PM
MarkWills wrote:

> Interesting topic.
>
> Broken by culture is probably a good way to express the point you are
> making. Though I would say the sentiment could equally apply to other
> 'fringe' languages.

[%X]

> Trouble is, nobody is interested in that. Big sells. Department
> managers and project managers don't want a three-man programming team.
> They want a 300 man programming team? Why?
>
> Ego.

It is an ego that is driven by, and continues to drive, the
one-upmanship of office politics.



> When you are responsible for a 300-man programming team, the likes of
> which you will find in IBM, EDS, Cap Gemini, ICL et al, you are a big
> player in your company. You get large budgets. You get invited to the
> board's parties; if your project is a success, you might be invited
> onto the CEO's yacht or given a trip in his Learjet. It sounds
> ridiculous, but this does happen.

One wonders whether or not they would get the notion of the "surgical team"
approach.



> In addition, the larger and more complex something *appears* to be,
> the more money you can charge for it.

Also, the more money they need to charge for it to support the huge teams
they employ.

[%X]



> A lot of people reading this post will work in, or will have worked in
> large companies. Whilst doing so, they will have worked on projects
> where they *know* that they could have worked on the entire thing
> themselves, and done a better job, in a fraction of the time.

[%X]



> You would think that the company would be delighted. But in actual
> fact, the opposite has happened. It has pissed a lot of people off.

[%X]

> .............. I had the benefit of hindsight, and


> could see where they had gone wrong. I'm sure if they were allowed to
> dump their code, instead of putting filler on top of filler, they'd
> come up with something much better. Management won't allow it however,
> because to 'start again' is a sign of failure. A shame.

I have managed to get a complete restart of software development for a
project (non-Forth), but it took a week-long review meeting to get the
management to see that the previous two years' development had been a total
waste of time. I started again and still completed the software development
within the remaining two years of the development timetable. The restart was
a truly magnificent benefit. In my own development process I consider that
it is reasonable to have three iterations. Pass one gets the basic ideas
sorted, pass two improves it, pass three makes it fit for production.



> I think the long and the short of it is that large corporations have
> to build complexity into their operations, to be a large corporation.

They don't "have" to build the complexity in. They just seem to do so.

> Contentious and cynical, I know. But I'm only describing what I have
> observed over the years. My recent trip to China was a real eye
> opener. Here, in Europe, we put the cart before the horse. This is due
> mostly to ISO2000/2001 rules.

Did you mean ISO9001:2000?

> They make an entire project document
> based and document driven.

Why are you against documentation? My own process is very much
document-based, and it works well; it is efficient and effective at
ensuring the development of safe systems. It is, however, a simple
basic process that works well hierarchically and is scalable to any
size of project. Surgical teams for development are encouraged, and the
documents to be generated are quite clearly specified. The process
generates a good audit trail that enables process quality monitoring as
a by-product of its use.

> Documents are king. You will 'design' your
> widget with documents, which will be peer reviewed, and client
> reviewed, before the build starts.

That is the right way to do it for Mission Critical Systems which clients
are going to truly depend on. Otherwise, the system might not perform the
right mission activities, or may perform them incorrectly, and the discovery will
be at delivery acceptance when it is too expensive to change anything. This
leads to unhappy clients. This does mean that clients need to be clear about
what they need and developers need to be clear on what they are providing.

--
********************************************************************
Paul E. Bennett...............<email://Paul_E....@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************

Hugh Aguilar

Jul 23, 2010, 7:34:32 PM
On Jul 23, 4:02 pm, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
wrote:

> Why are you against documentation?

Documentation after the software is written is a good idea, and the
sooner the better.

Documentation *before* the software is written does more harm than
good.

Paul Rubin

Jul 24, 2010, 2:17:59 AM
John Passaniti <john.pa...@gmail.com> writes:
> I'm not going to design and prototype in a
> language like Python where I have access to rich libraries and high-
> level constructs and then confidently say, "well, it's just a matter
> now of translating to the target."

I guess it's application and target dependent, but this generally
hasn't been a big problem in stuff I've done. It's of course necessary
to keep the target requirements in sight (don't just write a
very-high-level system) and it probably helps that I haven't had to
target the smallest CPUs. YMMV of course.

> Hardware has a lead time. Memory has a cost. Processor cycles
> matter.... I work for a company that doesn't make huge quantities of
> products (a big product may be a few thousand units per year).... but
> more often saving a buck because we could use a "right-sized"
> processor instead of an overkill solution is what drives
> profitability.

I haven't (so far) had much to do with small cheap custom hardware and I
don't understand the economics of it that well. I do know that desktop
computer RAM costs about 3 cents per megabyte, so it seems odd that
micros with more than a few kilobytes are quite expensive. Maybe this
is a temporary situation. The big guys (mobile phone makers etc) seem
to have the problem solved. I've heard something like 5 billion ARM
cores will ship next year, mostly in phones. I see at rockbox.org that
cheap Sandisk mp3 players use a SOC with an ARM core, DSP extensions,
320KB of ram, and all the mixed signal audio stuff on the same part.
But hobby-level boards seem stuck at the very low end (Arduino = AVR
with 1k or so) or high end (Gumstix/Beagleboard = basically netbook
guts, with super-fast power hungry XScale processor, 64MB ram, wireless
networking, etc). I'm not sure why any of this is. It would make
many things simpler and easier if there was a middle level available.

John Passaniti

Jul 24, 2010, 2:22:38 AM
On Jul 23, 7:34 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> > Why are you against documentation?
>
> Documentation after the software is written is a good idea, and the
> sooner the better.
>
> Documentation *before* the software is written does more harm than
> good.

That depends entirely on what "documentation" means. Modern agile
methodologies do document use cases and "user stories", capturing what
the system should do. That kind of documentation is vital for
projects to keep everyone focused on what they are supposed to be
delivering, and they serve as a way to identify scope creep and
requirements changes. Another kind of documentation that can be
important to have are test plans. Architectural documentation is less
important, since it can change, but even there, it can serve to
capture the initial ideas behind a design, which can be useful later
in post-mortem reviews of the project.

I generally find most UML diagrams to be a waste and prefer to
describe systems in terms of larger design patterns. But some UML
diagrams (like statecharts) can very nicely capture complicated
behavior visually.

Code documentation for most languages usually needs to be little more
than a description since automatic documentation generators can
extract information from function signatures. That isn't possible in
Forth, and so stack effect diagrams are needed. Whenever possible,
code documentation needs to be automated.
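
As a small illustration of the difference, compare a C prototype, which
already carries the arity and types in a form a documentation tool can
read, with a hypothetical Forth word, which has to carry the same
information in a conventional stack comment:

\ In C the signature itself is machine-readable documentation:
\     int scale(int value, int factor);
\ The equivalent Forth word needs an explicit stack comment:

: scale ( value factor -- value*factor )  * ;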

How much any documentation matters also depends on how many people are
working on the code. As the number of people increase, it becomes
more important to keep everyone in sync. Often the people who think
documentation is a waste of time are those who are "lone wolf"
programmers who don't have to communicate the design. When these
people start working in groups, they instantly see the value.

Andrew Haley

Jul 24, 2010, 4:22:25 AM
Paul Rubin <no.e...@nospam.invalid> wrote:
>
> I haven't (so far) had much to do with small cheap custom hardware
> and I don't understand the economics of it that well. I do know
> that desktop computer RAM costs about 3 cents per megabyte, so it
> seems odd that micros with more than a few kilobytes are quite
> expensive.

Well, hold on. How much does one megabyte of RAM cost? Not a
gigabyte stick, just one megabyte.

Andrew.

Paul Rubin

Jul 24, 2010, 4:42:11 AM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>> I do know that desktop computer RAM costs about 3 cents per megabyte,
>
> Well, hold on. How much does one megabyte of RAM cost? Not a
> gigabyte stick, just one megabyte.

You can't buy a megabyte of ram all by itself, any more than you can buy
a floating point multiplier by itself. You instead get several MB of
ram on the desktop CPU chip at no extra charge, described as an L2
cache. That is where small quantities of ram belong, on the same die as
the CPU, just like with an AVR.

For 50 cents or so you can get an AVR microcontroller with a few K of
flash and 1k or so of ram. Per Rockbox.org I see that cheap Sandisk mp3
players use something beefier: an SOC containing an ARM core, 512MB of
flash, 320KB of ram, DSP extensions for media codecs, plus mixed-signal
stuff (hi-fi DACs and a power amp) specific to audio players. Low end
mobile phones probably use something comparable. It is nicely
intermediate between the AVR style part, and something like a Gumstix
processor. My hope is that a version without the special analog and DSP
stuff could be in the $5 range or lower even in fairly small quantity. It
can run languages like Lua or Java (J2ME) rather nicely, making life
pleasant for the programmer and allowing safe sandboxing of user
scripts. My low end Canon digicam uses something similar, making CHDK
(chdk.wikia.com) possible.

I wonder why it is that chips like this aren't available (or at least
widely used) in AVR-like packaging.

Andrew Haley

Jul 24, 2010, 5:00:48 AM
Paul Rubin <no.e...@nospam.invalid> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>>> I do know that desktop computer RAM costs about 3 cents per megabyte,
>>
>> Well, hold on. How much does one megabyte of RAM cost? Not a
>> gigabyte stick, just one megabyte.
>
> You can't buy a megabyte of ram all by itself, any more than you can
> buy a floating point multiplier by itself. You instead get several
> MB of ram on the desktop CPU chip at no extra charge, described as
> an L2 cache. That is where small quantities of ram belong, on the
> same die as the CPU, just like with an AVR.
>
> For 50 cents or so you can get an AVR microcontroller with a few K
> of flash and 1k or so of ram. Per Rockbox.org I see that cheap
> Sandisk mp3 players use something beefier: an SOC containing an ARM
> core, 512MB of flash, 320KB of ram, DSP extensions for media codecs,
> plus mixed-signal stuff (hi-fi DACs and a power amp) specific to
> audio players. Low end mobile phones probably use something
> comparable. It is nicely intermediate between the AVR style part,
> and something like a Gumstix processor. My hope is that a version
> without the special analog and DSP stuff could be the $5 range or
> lower even in fairly small quantity. It can run languages like Lua
> or Java (J2ME) rather nicely, making life pleasant for the
> programmer and allowing safe sandboxing of user scripts.

Well, hold on now. You're assuming that a large footprint bondage-
and-discipline language will make life more pleasant for the
programmer of such a device. You surely can't expect much sympathy
for such a position from this group! :-)

The core question is whether, for the programmer of an embedded
device, all that stuff (the OS kernel, the language support, the
sandboxing, etc) is part of the problem set or part of the solution
set.

Andrew.

Paul E. Bennett

Jul 24, 2010, 5:19:46 AM
John Passaniti wrote:

> On Jul 23, 7:34 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
>> > Why are you against documentation.
>>
>> Documentation after the software is written is a good idea, and the
>> sooner the better.

I write no documentation after the software is written. It is much too late
by then as once the software is written and tested it is out the door and
the next project beckons.

>> Documentation *before* the software is written does more harm than
>> good.
>
> That depends entirely on what "documentation" means. Modern agile
> methodologies do document use cases and "user stories", capturing what
> the system should do. That kind of documentation is vital for
> projects to keep everyone focused on what they are supposed to be
> delivering, and they serve as a way to identify scope creep and
> requirements changes. Another kind of documentation that can be
> important to have are test plans. Architectural documentation is less
> important, since it can change, but even there, it can serve to
> capture the initial ideas behind a design, which can be useful later
> in post-mortem reviews of the project.

Think of the documentation as the plan. Once it is produced you are sticking
to the plan. This is why it is often more productive to concentrate on the
documentation first so that you can test those PHB's who like to see the new
stuff running 5 seconds after they hand you the task. In the end holding off
writing production software (or building the hardware), until you have all
the documentation in place, will benefit the quality and integrity of the
final product.

> I generally find most UML diagrams to be a waste and prefer to
> describe systems in terms of larger design patterns. But some UML
> diagrams (like statecharts) can very nicely capture complicated
> behavior visually.

Whatever style works for you. I like diagrams and models of all sorts (even
the cardboard cut-out models that I use for complex moving machinery).
Anything that gets you a fuller understanding of the full requirements is
good.



> Code documentation for most languages usually needs to be little more
> than a description since automatic documentation generators can
> extract information from function signatures. That isn't possible in
> Forth, and so stack effect diagrams are needed. Whenever possible,
> code documentation needs to be automated.

If you use a literal style of programming and include useful header comments
about the aims of each module, subroutine (word) then tools like DocGen can
extract a very useful set of information for the documentation package.



> How much any documentation matters also depends on how many people are
> working on the code. As the number of people increase, it becomes
> more important to keep everyone in sync. Often the people who think
> documentation is a waste of time are those who are "lone wolf"
> programmers who don't have to communicate the design. When these
> people start working in groups, they instantly see the value.

This occasional "Lone Wolf" has also worked for the larger companies. Even
in my lone wolf mode I always wrote the documentation first just to keep the
aims clear in my own mind. Then a lot of what I do has to get certified.

Albert van der Horst

Jul 24, 2010, 6:04:34 AM
In article <19e21af4-7b91-4df9...@f33g2000yqe.googlegroups.com>,
Hugh Aguilar <hughag...@yahoo.com> wrote:
>On Jul 23, 4:02 pm, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>

(I hit the reply button before I realised it was you. Otherwise
I wouldn't have bothered.)

With some difficult problems where you don't even know what
you want in a design document, you start experimenting in code.
Even so, you must write down low level specs, or you get lost.
You must keep notes of blind alleys, or you get lost.
There are better programmers than me (i.e. in the sense they
can keep more in their head at the same time, the types with
chess-like talents), but eventually they get lost.

But why do I give you the benefit of the doubt, trying to find a
situation where this might be true? As it stands, it is nonsense.

Groetjes Albert


--
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Paul Rubin

Jul 24, 2010, 5:38:20 AM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
> The core question is whether, for the programmer of an embedded
> device, all that stuff (the OS kernel, the language support, the
> sandboxing, etc) is part of the problem set or part of the solution
> set.

Sandboxing is a huge win for certain types of applications, though
admittedly not the ones Forth is usually directed at. I'm ok with
considering it a special-purpose feature.

What I meant about programming pleasantness was memory safety, garbage
collection, built-in support for nested structures, first-class
functions, OOP, etc.

Andrew Haley

Jul 24, 2010, 6:32:17 AM

Yeah, I realize that. It's still an open question.

Andrew.

Bernd Paysan

Jul 24, 2010, 2:58:07 PM
John Passaniti wrote:
> In companies
> where management is indeed only concerned with the bottom line, then
> it becomes the responsibility of the software developers to frame
> their suggestions in terms of the bottom line. If they don't, you
> can't blame management.

Maybe, but there is still the problem of being believed. If I tell
management that maintaining and fixing the worst problems of the
current solution (which has already cost nine months) will take three
months, and that throwing it away and rewriting from scratch will take
two weeks, then the boss won't believe it. OK, I've already done that
kind of thing, but the bosses change every year or so.

There are similar problems, e.g. when describing the amount of code I
need for something to a new customer who knows what the competition
needs for the same function. The response shows that they simply don't
believe me, and explaining to them that I a) use a processor with very
compact code, and b) keep my algorithm extremely simple and do the
pre-processing outside of the embedded controller, still causes raised
eyebrows.

Furthermore: the amount of time different people need for the same
task varies quite a lot in programming and similar work. In the current
project I'm working on, I went for a short holiday, and the program
manager assigned my tasks to two coworkers in the meantime. My
coworkers complained about the impossible schedule, and I told the
program manager that he can't simply assign my tasks to other people
with the same time estimate. This is not only because I'm faster than
they are, but because I also did some pre-planning when working out the
initial plan, so I already know what I'm going to do.

And about the question of team size:

I'm fully aware that I can't make my chips without thousands of other
experts around the world. They develop all kinds of equipment, operate
fabs, create design kits and CAD software, but they are *not* part of
my team. That's why it works. They have their teams, their objectives,
and well-defined interfaces between them, most of which I don't have to
be aware of at all. This is just the same as factoring a program. If
you don't factor 300 people, but put them into one big team, you get
the same bloat as if you don't factor 300 lines and write them into one
single function. And it's worse with people than with lines: if the
function really doesn't need 300 lines, I write it in 3 or 30 lines
(whatever it really takes), but when you start with 300 people, and
your project doesn't need them, you'll inevitably end up with bored
people who try to make themselves somehow useful, and thereby create
bloat and block others.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Brad

Jul 24, 2010, 3:12:54 PM
On Jul 22, 1:44 pm, Helmar <hel...@gmail.com> wrote:
> I do not think that Forth is broken by bad technology - it's broken by
> culture. It's impossible to reach agreement, and it's impossible for
> industry to allow idiosyncratic languages. The corset imposed by
> languages like C & friends is what the decision makers require.
> Everything flexible loses out to a system that declares rules everyone
> can follow. This is why in politics we have dictatorship or democracy
> and not a system of "real" freedom.
>
I can think of some reasons Forth is not used.

1. As in any other profession, there are too many inept programmers.
They might not be the majority but they are enough to present a
virtual minefield to management. Better to limit the amount of damage
they can do, especially if things can blow back on you.

2. Most programmers I work with resist new ways of working. Sure,
Forth can be learned quickly but becoming a good Forth programmer is a
multi-year effort.

OTOH, I can think of some reasons Forth will continue to be used in
some applications.

1. Strategic business advantages matter. There will always be some
management willing to stick their neck out to improve the bottom line.

2. Resource constrained systems will be around for a long time.

-Brad

Elizabeth D Rather

Jul 24, 2010, 4:02:05 PM
On 7/24/10 9:12 AM, Brad wrote:
...

> OTOH, I can think of some reasons Forth will continue to be used in
> some applications.
>
> 1. Strategic business advantages matter. There will always be some
> management willing to stick their neck out to improve the bottom line.
>
> 2. Resource constrained systems will be around for a long time.

Resource-constrained systems will be around *forever*, for the simple
reason that however much processors improve in power and speed, and
memory and other resources become cheaper and more plentiful, the
ambitions of designers and developers grow at an even faster pace, as
does the need to be earlier-to-market with lower unit costs. A language
and/or methodology that will reliably deliver faster development and
more efficient use of resources will always have a place.

Jerry Avins

Jul 24, 2010, 4:13:24 PM
On 7/24/2010 4:02 PM, Elizabeth D Rather wrote:
> On 7/24/10 9:12 AM, Brad wrote:
> ...
>> OTOH, I can think of some reasons Forth will continue to be used in
>> some applications.
>>
>> 1. Strategic business advantages matter. There will always be some
>> management willing to stick their neck out to improve the bottom line.
>>
>> 2. Resource constrained systems will be around for a long time.
>
> Resource-constrained systems will be around *forever*, for the simple
> reason that however much processors improve in power and speed, and
> memory and other resources become cheaper and more plentiful, the
> ambitions of designers and developers grow at an even faster pace, as
> does the need to be earlier-to-market with lower unit costs. A language
> and/or methodology that will reliably deliver faster development and
> more efficient use of resources will always have a place.

Von Neumann's comment about Rajchman's proposed 32 Kword memory: "That's
great, Jan, but what would anyone ever use so much memory for?"

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Paul Rubin

Jul 24, 2010, 5:32:20 PM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>> What I meant about programming pleasantness was memory safety, garbage
>> collection, built-in support for nested structures, first-class
>> functions, OOP, etc.
> Yeah, I realize that. It's still an open question.

I can understand (and mostly share) your antipathy to Java, but you
might give Lisp a try if you haven't. I see it as a spiritual
fellow-traveller of Forth in some ways, so it's worth knowing both.
Lisp burns more memory than Forth does, but it eliminates a lot of
bookkeeping and debugging headaches, while giving even more flexibility
for customizing the language for the application's requirements.
Forth's strength is in very small systems (AVR, etc.) and in systems
with hard realtime requirements (small Lisps will usually need to pause
a few msec for garbage collection every so often). For bigger systems
(say 32k of ram or more), Lisp can make programming a lot easier while
sharing something like Forth's unconstrained spirit.

John Passaniti

unread,
Jul 24, 2010, 5:45:14 PM7/24/10
to
On Jul 24, 5:19 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
wrote:

> John Passaniti wrote:
> >> Documentation after the software is written is a good idea, and the
> >> sooner the better.
>
> I write no documentation after the software is written. It is
> much too late by then as once the software is written and
> tested it is out the door and the next project beckons.

You responded to Hugh's statement, not mine. I largely agree with
you, but in my experience, there is always a small amount of
retrospective documentation (mostly related to the design of the
software, quirks about the system learned during development, etc.).
This kind of documentation can only be written after the project, and
yes, it often will get put by the wayside as the next project looms.
This always upset me, and so when estimating projects, I always added
a "retrospective documentation" task. That way, it's budgeted for.

> Think of the documentation as the plan. Once it is produced you are
> sticking to the plan.

How well you can stick to a plan depends on how detailed the plan is.
As you work on safety-critical systems, I would imagine that your
plans are very detailed, and that if there is any deviation from those
plans, they have to go through some approval process. That's pretty
specialized work, and it's one of the few places where "big design up
front" is probably necessary. But that's not the kind of work I (and
I imagine most) developers do. Most of the time, you have people who
may be domain experts who can tell you what they want up front, but
will probably not see all the consequences of what they want. And as
development progresses, those consequences along with changes in the
market or new ideas may need to be incorporated. This is why the
"agile" methodologies are designed around practices that embrace
change, because in most projects, you are going to have change, and
it's not often an acceptable answer to say, "sorry, that's not in the
plan."

Again, agile methodologies *do* produce documentation, including
documentation prior to the project starting. It's impossible to come
up with a meaningful estimate on a project without experienced
developers sitting down, discussing the system, coming up with a rough
plan of attack, detailing a task list, and then estimating those
tasks. And certainly, documentation that comes out of that (diagrams,
patterns, task lists, task sequences, etc.) is valuable
documentation. But it's a guide, not the plan. The plan may need to
adapt to change.

I don't think we disagree, but I do think the kind of work we do is
different and that what we mean by documentation is very different.

> If you use a literal style of programming and include useful header
> comments about the aims of each module, subroutine (word) then tools
> like DocGen can extract a very useful set of information for the
> documentation package.

I'm not sure what you mean by a "literal style." But yes, putting
code documentation in the code itself is obviously a good thing.
Whenever code and its documentation are separated, they *always* get
out of sync.  My point is that in most languages, the fact that they
typically have a rigid syntax means that you can easily extract all
sorts of information from the code itself and use that as part of the
documentation. In Forth, the ability to create new syntax makes this
difficult. So you have to put a bit more effort in inserting such
documentation in Forth than you might in other languages. But
regardless, keeping it together is a good idea.
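
To make that concrete, here is a minimal sketch of the kind of inline
documentation meant here. The word SCALE and the comment layout are only
illustrative (the exact markup a tool like DocGen expects isn't shown in
this thread); the point is that the glossary text, the stack comment and
the definition sit on adjacent lines, where a simple text tool can pull
the comments out for the documentation package and where code and
comments cannot quietly drift apart.

\ SCALE ( n1 -- n2 )
\ Convert a raw ADC reading n1 into engineering units n2.
\ Assumes a fixed offset of 32 counts and a gain of 10.
: scale ( n1 -- n2 )  32 -  10 * ;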

> This occasional "Lone Wolf" has also worked for the larger companies. Even
> in my lone wolf mode I always wrote the documentation first just to keep the
> aims clear in my own mind. Then a lot of what I do has to get certified.

Yep, and if you had worked instead for any of the companies I've
worked for, I imagine your head would explode as the project's
definition was subject to constant change.

John Passaniti

unread,
Jul 24, 2010, 9:24:40 PM7/24/10
to
On Jul 24, 2:58 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Maybe, but there still is the problem with believing what people say.  
> If I tell management that maintaining and fixing the worst problems of
> the current solution (which already cost nine months) takes three
> months, and throwing it away and rewriting from scratch takes two weeks,
> then the boss won't believe it.

Let's get something out of the way first. I don't know you, and I
don't know your bosses. But I can say with complete confidence that
there are (and will always be) some bosses who are morons, who meet
every pointy-haired Dilbert or "Office Space" stereotype. They exist,
they will always exist, and your only option if you choose to continue
to work under them is to comply. So it isn't at all interesting to
talk about such bosses because there isn't much you can do about
them. The only purpose they seem to serve-- in comp.lang.forth-- is
for people to endlessly talk about them. And they do this for two
reasons-- to pretend that such bosses are the common case that
everyone has to deal with, and to have an anonymous punching bag for
their own failure. So now that we have that out of the way, maybe we
can have a useful discussion.

Here's the reality: Your bosses probably have to juggle budgets,
milestones, staffing, resources, and dozens of other constraints.
They have to make decisions based on what they know or can reasonably
predict. And depending on the type and size of company we're talking
about, a bad decision could mean anything from a slight dip on a
profit and loss statement to people being laid-off... or worse. So
when you trot up to your boss and say, "I can rewrite that system in
two weeks," they have to rationally and objectively evaluate your
claim and make a decision that will have consequences.

Part of that evaluation is easy: It's nothing special for practically
*anyone* to claim they can rewrite a system from scratch in less time
than it took to originally write it. When you rewrite a system from
scratch, you aren't really rewriting it from scratch-- you're using
perfect 20/20 hindsight, looking at someone else's work and being able
to pick and choose what did and didn't work. You'll also probably
benefit from the fact that the system is probably better defined than
when the original developers started on it. If we're talking about an
embedded system, you also benefit from the hardware probably being
debugged; you likely don't have to start from zero bringing up first-
turn boards. And even if your rewrite shares zero lines of code with
the original system, you still have the opportunity to evaluate the
overall design and approach they took. So you're starting well ahead
of where the original developers of the system started. I'm facing
this at work right now-- I've inherited a codebase that has evolved
over several years, and has an enormous amount of cruft and bad
design. OF COURSE I can rewrite that code in less time than it took
to create it.  Who couldn't, when you have all those advantages?

So the question your boss had to answer wasn't *if* you could rewrite
the code faster than the original nine months, because of course you
could. The question is if you could meet your two week estimate. If
you can, great, everyone wins. If you can't-- if you were wrong--
then there are consequences for your poor estimation that the boss is
likely to face. This is the essence of game theory-- making strategic
judgments that are in part dependent on the choices others make. And
in this case, you've presented your boss with two mutually-exclusive
choices; spend three months to fix the problem in the current code, or
spend two weeks to rewrite the code from scratch. You can't do both
simultaneously, and the boss has to evaluate if he believes you will
be successful and assign a cost if you aren't. You merely saying you
can do it doesn't mean you can. Are you going to tell me that every
estimate you've ever provided has been dead on?

So the responsibility falls on you-- not your boss-- to prove your
claim...

> Ok, I've already done that kind of things,
> but the bosses change every year or so.

So? Even if your boss didn't change, is every project you do so
similar to the last that you can say your past performance necessarily
will translate to the next? If that kind of repetition describes your
career, great, that's something you can bring up with your boss and
say, "I have a proven track record that I can do this and I can point
to these past similar projects that demonstrate that claim." But if
this project is not similar to the ones you'll cite, and if it
presents any new challenges that you can't demonstrate experience
with, then it's completely reasonable for your boss to question your
estimate.

It's also reasonable for your boss to question the quality of your
estimate if it isn't documented with the details on exactly *how* you
will do what you claim. For an estimate of two weeks to rewrite a
system, I know that if I was your boss, I would *minimally* ask for
all of the following:

1. A clear statement of your approach.
2. A clear statement of what was wrong with the original design.
3. A rough task list, detailing the work you will do.
4. A set of milestones for the next two weeks.

Item #1 is obvious-- if you can't express your plan clearly and
coherently, then why should I believe you? Item #2 is less obvious--
you better be able to explain what went wrong with the original
design, because if you can't, then why should I believe you won't make
the same mistakes? Item #3 proves to me that you've done more than
lick your finger, stick it in the air, and come up with a two week
estimate from nothing. And finally, item #4 gives me the objective
means to get early warning if you aren't going to make your estimate.
That is, I'm not going to let you go off on a private Jazz Odyssey and
then two weeks later say "here it is" or "sorry." You're going to set
up small milestones and as you meet them, you're going to tell me. If
you are successful, great. If not, then I get early warning that you
may not meet your estimate, and I can then move to some other strategy
to mitigate it.

If you didn't provide all this to your boss as part of your claim that
you can do it in two weeks, then you're being unreasonable.

> There are similar problems, e.g. when describing the amount of code I
> need for something - to a new customer, who knows what the competition
> needs for the function.  The response shows that they simply don't
> believe me, and explaining them that I a) use a processor with very
> compact code, and b) keep my algorithm extremely simple, and do the pre-
> processing outside of the embedded controller still causes raised
> eyebrows.

It should. I have no idea what documentation you provide to support
those claims, but I would want to see:

1. A size benchmark of algorithms relevant to the problem domain
showing how much more compact the code is.
2. A clear statement of the simpler algorithm you'll be writing.
3. A clear statement of exact preprocessing load done outside the
embedded controller.

You should see a pattern here. You have a responsibility to prove
your claims. If you don't, then you can't blame your boss.

> Furthermore: The amount of time different people need for the same task
> varies quite a lot in programming and similar tasks.  In the current
> project I'm working on, I went for a short holiday, and the program
> manager assigned my tasks to two coworkers in the meantime.  My
> coworkers complained about the impossible schedule, and I told the
> program manager that he can't simply assign my tasks to other people
> just with the same time estimation.  This is not only because I'm faster
> than them, but because I also did some pre-planning to know what I'm
> going to do when working out the initial plan.

I'm not sure what the point of this is. Estimates should only apply
to the people who make them. That's fundamental to every development
methodology, formal or informal, that I've ever heard of. If the
point is your boss rejected your two week claim because he benchmarked
you against slower developers, then that again is reasonable. Where I
work now, I'm a slower developer than my predecessor. I'm also a lot
more careful, and where he would hack things together and get them to
sorta kinda work, my designs are a lot more deliberate and focus on
being right by design. Indeed, many of the bugs I've addressed over
the past couple months are because my predecessor thought it was more
important to get out code quickly than to make sure it would work. So
telling me you're faster than your peers really doesn't tell me much.
One would hope you balance speed with quality, but as a boss trying to
evaluate your "I can do that in two weeks" claim, that is yet another
one of the concerns I have to think about before giving you the go-
ahead.

> And about the question of team size:
>
> I'm fully aware that I can't make my chips without thousands of other
> experts around the world.  They develop all kinds of equipment, operate
> fabs, create design kits and CAD software, but they are *not* part of my
> team.  That's why it works.  They have their teams, their objectives,
> and well-defined interfaces between them, most of them which I haven't
> to be aware of at all.  This is just the same as factoring a program.  
> If you don't factor 300 people, but put them into one big team, you get
> the same bloat as if you don't factor 300 lines and write them into one
> single function.  And it's worse with people than with lines: If the
> function really doesn't need 300 lines, I write it in 3 or 30 lines
> (whatever it really takes), but when you start with 300 people, and
> your project doesn't need them, you'll inevitably end up with bored
> people who try to make themselves somehow useful, and thereby create bloat
> and block others.

Sorry, I don't agree. I can certainly see how it can happen in
projects that are mismanaged or that have poor communication. But the
ultimate cause of bloat isn't merely the number of people working on a
project. People *want* to believe this because it's a simple answer
to a complex problem. And people love simple answers. It saves time
and effort in actually thinking about the real problems.

Personally, I'm not terribly interested in projects that have hundreds
of programmers. Those projects are exceptions to the experience that
the vast majority of developers will face, and it isn't at all clear
to me what lessons one can learn from exceptions to the norm. And
just like the canonical meat-head boss stereotype that people in
comp.lang.forth love to trot out, these stories about huge projects
with hundreds of developers on them don't seem to be terribly
relevant, unless the claim being made is that they too are the norm.

I'm willing to bet that if we were to take a representative sampling
of software developers and create a histogram of the number of
developers they work with on their projects, that the number of people
working in teams of 300 or more are a *tiny* minority. And as a tiny
minority, I fearlessly say that drawing conclusions about such a
population is likely to be suspect.

Paul E. Bennett

unread,
Jul 25, 2010, 4:35:48 AM7/25/10
to
John Passaniti wrote:

[%X]

> How well you can stick to a plan depends on how detailed the plan is.
> As you work on safety-critical systems, I would imagine that your
> plans are very detailed, and that if there is any deviation from those
> plans, they have to go through some approval process.

The changes to plan (as distinct from the changes that correct the errors
made in compliance with the plan) will usually attract additional costs
from the client (ie: if he suddenly decides that he needs an extra function
or wants to lose a function then there are costs associated with that change
of plan). Changes that correct the developed product have to be swallowed.
In most of my line of work it is normal to make the extent of supply a
formal part of the overall contract.

> Most of the time, you have people who
> may be domain experts who can tell you what they want up front, but
> will probably not see all the consequences of what they want.

So the main problem is the quality of information gathering at the start
of the project.

> And as
> development progresses, those consequences along with changes in the
> market or new ideas may need to be incorporated. This is why the
> "agile" methodologies are designed around practices that embrace
> change, because in most projects, you are going to have change, and
> it's not often an acceptable answer to say, "sorry, that's not in the
> plan."
>
> Again, agile methodologies *do* produce documentation, including
> documentation prior to the project starting. It's impossible to come
> up with a meaningful estimate on a project without experienced
> developers sitting down, discussing the system, coming up with a rough
> plan of attack, detailing a task list, and then estimating those
> tasks. And certainly, documentation that comes out of that (diagrams,
> patterns, task lists, task sequences, etc.) is valuable
> documentation. But it's a guide, not the plan. The plan may need to
> adapt to change.


Plans do change. In my industries changing plans requires effort in gaining
approval for the changes, leading to re-approval of the product and its
certification. The process I use is quite agile and is geared towards coping
with changes that will happen, and it assists with the generation of the
fully traceable audit trail that is required of my industries.



> I don't think we disagree, but I do think the kind of work we do is
> different and that what we mean by documentation is very different.

I think we are both in agreement on the need for full and comprehensive
documentation. However, when you say ".. but in my experience, there is
always a small amount of retrospective documentation (mostly related to the
design of the software, quirks about the system learned during development,
etc.)" I would say that these quirky aspects should have been discovered
earlier than the final product test.



>> If you use a literal style of programming and include useful header
>> comments about the aims of each module, subroutine (word) then tools
>> like DocGen can extract a very useful set of information for the
>> documentation package.

The "literal style" I was referring to is indeed the inclusive documentation
in source files. As I use Forth for the most safety critical systems I tend
to write the requirements for a word first (in imperative Glossary terms).



>> This occasional "Lone Wolf" has also worked for the larger companies.
>> Even in my lone wolf mode I always wrote the documentation first just to
>> keep the aims clear in my own mind. Then a lot of what I do has to get
>> certified.
>
> Yep, and if you had worked instead for any of the companies I've
> worked for, I imagine your head would explode as the project's
> definition was subject to constant change.

More likely we would be heading for a contract review meeting to
work out the cost impact of the changes and what extra money would
need to be charged for implementing the change. My head does not explode
that easily.

In general John, I think we have rather a lot we do agree on.

ken...@cix.compulink.co.uk

unread,
Jul 25, 2010, 4:43:06 AM7/25/10
to
In article
<e982890e-f14e-4feb...@u26g2000yqu.googlegroups.com>,
john.pa...@gmail.com (John Passaniti) wrote:

> Sometimes. And sometimes those documents matter.

They certainly matter in the case of software that is being used by
more than one company. An example of this is the ticket issuing software
being used by British railway companies. Due to the way the revenue is
sliced and diced the software both for fixed and portable machines has
to be written by a central source and accommodate issues for specific
companies.

Ken Young

Paul E. Bennett

unread,
Jul 25, 2010, 4:57:39 AM7/25/10
to
Albert van der Horst wrote:

> In article
> <19e21af4-7b91-4df9...@f33g2000yqe.googlegroups.com>,
> Hugh Aguilar <hughag...@yahoo.com> wrote:
>>On Jul 23, 4:02 pm, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
>>wrote:
>>> Why are you against documentation.
>>
>>Documentation after the software is written is a good idea, and the
>>sooner the better.
>>
>>Documentation *before* the software is written does more harm than
>>good.
>
> (I hit the reply button before I realised it was you. Otherwise
> I wouldn't have bothered.)

I don't think I should feel hurt by those sentiments. ;>



> With some difficult problems where you don't even know what
> you want in a design document, you start experimenting in code.
> Even so, you must write down low level specs, or you get lost.
> You must keep notes of blind alleys, or you get lost.
> There are better programmers than me (i.e. in the sense they
> can keep more in their head at the same time, the types with
> chess-like talents), but eventually they get lost.

With those difficult problems it is OK to play a bit and build prototype
code to explore the problem domain. I often play for a while (in multiple
media) when trying to resolve what the real problem areas are going to be.
Yes you will document the methods tried, the results and problems you
encountered. Just remember to throw that prototype into a dark hole
somewhere and not to use it in the final product. Develop the code for the
final product from the documentation you generated by playing. It will be
far better quality for doing so.

> But why do I give you a benefit of the doubt, trying to find
> situation where this might be true? As it stands it is nonsense.

It has been the subject of psychological research, and the conclusion I
have seen is that most people can hold at most just seven distinct ideas
in their head at one time. It can also be difficult to carry the same seven
notions for a long period of time. So, documentation is a way to preserve
those notions long term.

Paul E. Bennett

unread,
Jul 25, 2010, 5:36:32 AM7/25/10
to
John Passaniti wrote:

> On Jul 24, 2:58 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
>> Maybe, but there still is the problem with believing what people say.
>> If I tell management that maintaining and fixing the worst problems of
>> the current solution (which already cost nine months) takes three
>> months, and throwing it away and rewriting from scratch takes two weeks,
>> then the boss won't believe it.

[%X ---- script about the responsibility of managers ----- %X]

> So the responsibility falls on you-- not your boss-- to prove your
> claim...

Claims need evidence. When you go to your boss with the claim of being able
to re-write the whole thing again in two weeks you should really be able to
back that claim with evidence of your capability. Do you have an accurate
count of the number of function points involved in the software? Do you know
how many lines of code you are going to produce? Do you know how much effort
you will need to debug the code you produce? There are a whole raft full of
questions that your boss will need answers to in order to have enough
confidence in your claims.

[%X -- the evaluation of required evidence I agree with John about ---%X]

> I'm willing to bet that if we were to take a representative sampling
> of software developers and create a histogram of the number of
> developers they work with on their projects, that the number of people
> working in teams of 300 or more are a *tiny* minority. And as a tiny
> minority, I fearlessly say that drawing conclusions about such a
> population is likely to be suspect.

What constitutes a team?

I am currently working on a research project that employs about 1000 people.
It is claimed to be a team.

Within the project there are a number of teams working on different aspects
of the overall project.

Within those teams there are other teams that do things like physics,
mechanical, electrical, electronics and software.

You have to be clear where your team boundaries are, what their
responsibilities are and how they contribute to the overall hierarchy of the
project. Teams will generally have a mix of personalities, capabilities and
disciplines. Managers should understand their team's dynamics in order to
manage them effectively. Probably what the PHB's fail to do.

Andrew Haley

unread,
Jul 25, 2010, 6:51:16 AM7/25/10
to
Paul Rubin <no.e...@nospam.invalid> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>>> What I meant about programming pleasantness was memory safety, garbage
>>> collection, built-in support for nested structures, first-class
>>> functions, OOP, etc.
>> Yeah, I realize that. It's still an open question.
>
> I can understand (and mostly share) your antipathy to Java, but you
> might give Lisp a try if you haven't.

I have no specific antipathy to Java: I'm saying that there are
pleasures to be had from both complex and simple systems. Complexity
has its disadvantages, and you have to weigh that against the
convenience of some language features. Simple systems are not just
about speed and resource constraints.

> I see it as a spiritual fellow-traveller of Forth in some ways, so
> it's worth knowing both.

I can see that, especially for the simpler LISPs. From what I've seen
of Common LISP I'm not sure it really applies there, though.

> Lisp burns more memory than Forth does, but it eliminates a lot of
> bookkeeping and debugging headaches, while giving even more
> flexibility for customizing the language for the application's
> requirements. Forth's strength is in very small systems (AVR, etc.)
> and in systems with hard realtime requirements (small Lisps will
> usually need to pause a few msec for garbage collection every so
> often). For bigger systems (say 32k of ram or more), Lisp can make
> programming a lot easier while sharing something like Forth's
> unconstrained spirit.

Well, that's an interesting pronouncement, but (please forgive me if
I'm wrong) you've given the impression that you don't know Forth at
all well.

Andrew.

The Beez'

unread,
Jul 25, 2010, 9:23:18 AM7/25/10
to
On Jul 22, 10:44 pm, Helmar <hel...@gmail.com> wrote:
Nonsense. Forth is broken by three issues and three issues only:
(a) Forth requires you to balance your stack. Since most programmers
are unable to balance their malloc()/free() calls and consequently
need garbage collection, Forth is not bound to be popular. In short,
cause 1: the lack of skilled programmers.

(b) Forth has traditionally had a high threshold because of the lack of
documentation. Since Forth is only used by very skilled programmers
they don't see why a skilled programmer would need documentation. The
code is clear enough. In short, cause 2: you either desperately WANT
to learn Forth or you give up because there is no easy way to pick it
up.

(c) Forth has long had a host of different Forth compilers and an even
wider host of standards. This required you to write the same program a
zillion times, a thing that Forth's creator even promotes. Rewriting a
program is no fun. It's much easier to write bloatware by including a
zillion external libraries of different quality, so your program can
fail in so many different and creative ways. ANS-Forth and
multimegabyte Forth compilers have fixed this issue for the most part.
In short, cause 3: lazy programmers

That's why Forth is not popular. And that's why we're discussing this
issue again and again every few years. Since it is a very fundamental
issue nothing changes and consequently discussing this issue has been
part of the Forth culture since the very beginning.

But see it from the bright side: Forth will always have its elite
corps of productive, skilled programmers who really WANT to program in
Forth. For some odd reason the Navy Seals never ask themselves why
there are so few Navy Seals.

Hans Bezemer

Paul Rubin

unread,
Jul 25, 2010, 11:19:37 AM7/25/10
to
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>> I see it as a spiritual fellow-traveller of Forth in some ways, so
>> it's worth knowing both.
>
> I can see that, especially for the simpler LISPs. From what I've seen
> of Common LISP I'm not sure it really applies there, though.

I was thinking of smaller lisps but now that you mention it, I think
it's also true (in a different way) of Common Lisp. Common Lisp is a
complex and crufty language, so certainly "simplicity" as a shared
element with Forth goes out the window, but I'd say the cultures of both
languages share a disdain for bureaucracy that's reflected in the
technical choices in them.

>> For bigger systems (say 32k of ram or more), Lisp can make
>> programming a lot easier...

> Well, that's an interesting pronouncement, but (please forgive me if
> I'm wrong) you've given the impression that you don't know Forth at
> all well.

True, I've been reading about it and messing with it a little, and I
wrote a toy implementation in Python a while back, but I haven't done
anything nontrivial with it directly. I'm going partly on the basis of
Forth code that I've seen posted here and elsewhere, partly from having
written enough C and assembly code over the years to appreciate the
benefits of automatic storage management, checked pointers, etc.

Bernd Paysan

unread,
Jul 25, 2010, 1:48:37 PM7/25/10
to
The Beez' wrote:
> (c) Forth has long had a host of different Forth compilers and an even
> wider host of standards. This required you to write the same program a
> zillion times, a thing that Forth's creator even promotes.

Well, that's said, but is that true? If you cut off the long tail of
unpopular systems, how many popular Forth compilers are there? How does
that compare to popular C compilers? A lot of people use GCC and a
Unix-like OS (mostly Linux) in the C world, and another lot uses MSVC
and Windows, that's the two most popular ones, and if you have seen one
of them, you have seen one of them, just like with a Forth system.

How many Forth standards are there, apart from the obsolete ones?
There's ANS and the successor, Forth200x, that's all.

Bernd Paysan

unread,
Jul 25, 2010, 2:01:02 PM7/25/10
to
John Passaniti wrote:
> Part of that evaluation is easy: It's nothing special for practically
> *anyone* to claim they can rewrite a system from scratch in less time
> than it took to originally write it. When you rewrite a system from
> scratch, you aren't really rewriting it from scratch-- you're using
> perfect 20/20 hindsight, looking at someone else's work and being able
> to pick and choose what did and didn't work.

In the case of the example I gave, there was no such 20/20 hindsight.
The original code was completely crap, no single line survived. It
originated at a university, was "improved" by two coworkers who left
the company instead of succeeding, and I barely got it to work.

> So the question your boss had to answer wasn't *if* you could rewrite
> the code faster than the original nine months, because of course you
> could. The question is if you could meet your two week estimate.

The solution was fairly trivial: I just did it.  And *then* I told the
boss that it took me two weeks to do it, while he had spoiled two
"resources" and wasted a lot of budget (that part went without saying, so
I didn't say it).  I remember some teeth gnashing, but I got a huge raise
shortly afterwards, and no further questions about what I was going to do.

> If
> you can, great, everyone wins. If you can't-- if you were wrong--
> then there are consequences for your poor estimation that the boss is
> likely to face.

If I fail, I have wasted two weeks.  If I don't try, I will waste three
months on the broken code.  If you have any grasp of game theory, even a
10% chance of my succeeding is worth a try.

> So the responsibility falls on you-- not your boss-- to prove your
> claim...

The proof of the pudding is in the eating.

> I know that if I was your boss, I would *minimally* ask for
> all of the following:
>
> 1. A clear statement of your approach.
> 2. A clear statement of what was wrong with the original design.
> 3. A rough task list, detailing the work you will do.
> 4. A set of milestones for the next two weeks.

You seem to grow pointy hairs, as well.  Under your command, I expand
the estimate to 6 weeks.  That is:

2 weeks to do what I need to do.
2 weeks to write up the stuff you want me to, as 20/20 hindsight of
the first two weeks.
2 final weeks which are scheduled only because you reversed action and
reaction.

> It should. I have no idea what documentation you provide to support
> those claims, but I would want to see:
>
> 1. A size benchmark of algorithms relevant to the problem domain
> showing how much more compact the code is.
> 2. A clear statement of the simpler algorithm you'll be writing.
> 3. A clear statement of exact preprocessing load done outside the
> embedded controller.
>
> You should see a pattern here. You have a responsibility to prove
> your claims. If you don't, then you can't blame your boss.

All of that was presented to the customer; as it was a one-hour
presentation, it was not very detailed. If you make outrageous claims,
they will raise eyebrows, even when you can fully support them. Sorry,
that's just normal psychology.

Elizabeth D Rather

unread,
Jul 25, 2010, 2:28:06 PM7/25/10
to
On 7/25/10 7:48 AM, Bernd Paysan wrote:
> The Beez' wrote:
>> (c) Forth has long had a host of different Forth compilers and an even
>> wider host of standards. This required you to write the same program a
>> zillion times, a thing that Forth's creator even promotes.
>
> Well, that's said, but is that true? If you cut off the long tail of
> unpopular systems, how many popular Forth compilers are there? How does
> that compare to popular C compilers? A lot of people use GCC and a
> Unix-like OS (mostly Linux) in the C world, and another lot uses MSVC
> and Windows, that's the two most popular ones, and if you have seen one
> of them, you have seen one of them, just like with a Forth system.
>
> How many Forth standards are there, apart from the obsolete ones?
> There's ANS and the successor, Forth200x, that's all.


Adding to these accurate and appropriate remarks, Chuck mainly promotes
writing a program about three times, not a zillion, and he's hardly
alone in this. That third pass produces a good solution. As time
passes, further mods are necessary for enhancements and changing
requirements, but that's a reality of software development, independent
of language.

Krishna Myneni

unread,
Jul 25, 2010, 5:46:48 PM7/25/10
to
On Jul 25, 10:19 am, Paul Rubin <no.em...@nospam.invalid> wrote:


The notion that Forth is only really useful on small systems, and
maybe only for very small applications, isn't true for me. Regardless
of how another language might approach a particular programming
problem, I have used Forth for some time now to develop and use
applications built from thousands of lines of Forth code. Such
applications can of course be done in C/C++, but would then be
inflexible and require a command interpreter to be written to be
usable. For example, I may have to interactively tune a
proportionality constant in a PI control loop. Now, it's a simple
matter of interactively changing the value of a variable from the
Forth prompt. There's no need to write a lot of meaningless user
interface code, which would consume hundreds or possibly thousands of
lines of code. Of course, these apps are not commercial products, so I
can be quite cavalier with respect to the user interface.
Nevertheless, it has always been nearly a trivial matter to train
others to use the code, even when the user is not aware of the
underlying Forth system. It has also not been a problem for others to
extend and customize the code for specific applications. Also, since
the particular application I'm considering involves data acquisition,
requiring MB of memory, and hundreds of MB of disk storage, it is
certainly not one that would fit into a small system.
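
To make the PI-loop tuning mentioned above concrete, here is a minimal
sketch. The names KP and KI and the integer scaling are made up for
illustration; they are not taken from the actual application.

variable kp   150 kp !    \ proportional gain (scaled integer)
variable ki     8 ki !    \ integral gain (scaled integer)

: pi-out ( error sum -- out )  ki @ *  swap kp @ *  + ;

Retuning the running system is then just a store at the Forth prompt,
e.g.  175 kp !  -- no user-interface code is involved at all.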

Granted that other interactive systems such as Lisp also provide
interactivity, thereby removing the need for much of the user
interface code, one comparison between the languages is simplicity of
the syntax for user commands. Forth allows great flexibility in the
syntax --- we aren't restricted to simply postfix notation (a small
sketch of this follows below). Some
systems developed under Lisp, e.g. the symbolic math program Maxima,
provide interactivity through a syntax that is not Lisp-like. In the
case of Maxima, access to the underlying Lisp interpreter is provided,
but not directly.

I don't claim that Forth is a better development tool for applications
such as computer algebra systems. In fact, Lisp is likely the better
choice here. However, I believe the notions that Forth is only for
small systems and only for very small applications, are not generally
valid --- the application is the important consideration.
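
As for the small sketch promised above on not being restricted to
postfix notation: a parsing word can read the rest of the command line
itself, so interactive commands need not look like postfix at all. SET
here is a hypothetical word invented for illustration, not anything from
the standard.

create kp 0 ,     \ behaves like a variable; built with CREATE so >BODY applies

: set ( n "name" -- )  ' >body ! ;   \ parse the next name, store n into it

\ At the prompt:   175 set kp    has the same effect as   175 kp !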

Krishna

Jerry Avins

unread,
Jul 25, 2010, 6:19:27 PM7/25/10
to

I don't know Bernd either, but I know his circumstance. I worked in a
job shop within RCA Labs that I had been instrumental in establishing.
We provided other groups, mostly within the Labs but also in product
divisions, with instrumentation not commercially available. One project
leader requested a controller for a room-size assembly machine for
video-disc caddies. I suggested doing the job with an already functional
Z-80 system using Forth. The project leader believed that neither the
language nor the processor was sophisticated enough to do the job, and
enlisted a team of two C programmers to build an 8086 system using Intel
boards. When our meeting ended, it was clear to me that the project was
in trouble. I set about doing the job anyway, about half time. A month
later, I offered to test my controller on the nearly finished machine
and was brushed off. About three weeks after that, their controller was
connected with disastrous results. It took about a week for the damage
to be repaired and software to be rewritten, and they tried again.
Another two weeks of debugging (two or three recompilations a day)
didn't bring success. Finally, our mutual Director suggested that they
try my offering. It took about an hour to change over the cabling. I
think that I was as surprised as anyone when it worked perfectly on the
first try.

Maybe I oughtn't have been surprised. The hardware was fully functional
before I began programming and had been used for a few other projects. I
had written (in the case of the monitor ROM and PolyForth, rewritten)
most of the code myself. The device drivers had all been used before.
The algorithms were simple. The machine's specifications were clear, and
I had kept my code current with changes as the project progressed.

The original controller was never reconnected. But I agree with you,
John, that it's not a good way to work.

Jerry
--
Engineering is the art of making what you want from things you can get.

¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

John Passaniti

unread,
Jul 26, 2010, 2:01:08 AM7/26/10
to
On Jul 25, 4:35 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
wrote:

> > Most of the time, you have people who
> > may be domain experts who can tell you what they want up front, but
> > will probably not see all the consequences of what they want.  
>
> So the main problem is the quality of information gathering at
> the start of the project.

No, not really. In the industry I'm in (networked digital audio and
related signal processing, primarily for sound reinforcement
applications), a project may run anywhere from a couple months to a
year. The fine people in sales and marketing put due diligence in
analyzing a product's needs and presenting to us what they want. But
things change. It could be the competition coming out with a new
killer feature. It could be a trade-show where at a pre-release
demonstration, a distributor says, "if you added this feature, I could
sell more." It could be a discovery or insight we make during
development that wasn't obvious when we started. The point is that at
least in our industry, it simply isn't possible to completely define
every aspect of the product and then proceed along a carefully planned
development path to that end. And if you try, you risk being at the
trailing end of the curve and you won't sell product. It's easy to
say it's just a matter of the quality of information gathering, but
short of hiring people with precognitive powers or investing in
industrial espionage, it's not reasonable.

There is also the fact that the systems we create don't often stand
alone. They are usually part of a larger system and have to
interoperate both with bleeding-edge technology and ancient. Here's a
typical signal path:

A musician plugs their guitar into their favorite wah-wah pedal,
which is plugged into a tube amplifier with a spring reverb,
which feeds a speaker cabinet to further color the sound,
which is picked up by a microphone in front of the cabinet,
which along with the signal feeding the amp is sent to a stage mixer,
which feeds a mix to the in-ear monitors of the drummer and also,
which is converted from analog to digital,
which is transported over Ethernet using a protocol,
which feeds a switch, aggregating multiple audio and control streams,
which is sent over a network (copper, fiber, free-space optical
laser),
which is received at the front-of-house station,
which is deaggregated into individual audio and control channels,
which feeds a mixing console,
which feeds outboard signal processing gear (both analog and digital),
which goes back into the mixing console,
which often is sent to a multichannel recorder and also,
which feeds the auditorium's "house" sound system,
which does more signal processing (EQ, limiting, feedback
suppression),
which is then processed for speakers (crossover, time alignment),
which then is sent to the amplifiers,
which feed the speaker arrays,
which the audience hears and they react to,
which is picked up by microphones,
which is sent back to the mixing console,
which is sent to the multichannel recorder.

And that's just the audio.

That's the industry I'm in, and I've worked on products that cover
that entire range. It's an industry where the old joke about "the
nice thing about standards is that there are so many of them" isn't a
joke. And depending on where the product fits in the signal chain,
you may have to deal with a dozen complex analog and digital
standards, some of which are ad hoc, others which are constantly
evolving. Get all 2010 and toss in video streams, stage lighting and
mechanical control protocols, and you have a soup of analog and
digital signals flying around. And even though a particular standard
may precisely dictate electrical and communications protocols, the
fact is that if an industry leader comes out with a popular product
that deviates from those standards, you still have to interoperate
with them. At least if you have any hope of selling your product.

So change isn't just a function of people not defining the product
adequately. Change also happens because depending on where you are in
the signal processing and control chain, you may have to deal with
real standards, quasi-standards, and ad hockery both before and after
your product.

> Plans do change. In my industries changing plans requires effort in
> gaining approval for the changes, leading to re-approval of the product
> and its certification. The process I use is quite agile and is geared
> towards coping with changes that will happen, and it assists with the
> generation of the fully traceable audit trail that is required of my
> industries.

I think we're using "change" and "agile" in different ways.

I don't work in safety-critical systems, but can imagine that such
products (aircraft instrumentation, medical systems, railway
signaling, etc.) usually only change in small, very controlled ways.
It is probably extremely rare that someone comes up with "game
changing" features during development, and if they do, it's probably
going to show up in a different product. From what I know about the
stringent approval processes for safety-critical systems, a major new
feature is likely to push out a product's delivery date (and revenue
from it) by months. So I'm guessing that "change" in your case is
quite limited, because if it wasn't, you would never ship product.

In my industry, "change" can happen at any time, and can wildly affect
the product. One of my current projects is a multichannel remote
control. There is some base functionality in the product that sales
and marketing agrees on that I'll be implementing. But they view it
more as a prototype, and they'll be giving it to some beta testers to
get feedback on what else it could usefully control. Right now, it
has bi-color LEDs on each channel, so I can light up each channel red,
green, or orange. But who knows-- they might come back and say they
want a tri-color LED in order to display more states.  That will
require a hardware change. Or they might require that it communicates
with more than one device over Ethernet. That might drive the need
for more memory for Ethernet communications. From a software
perspective, you deal with this kind of change by having a flexible
design from the front-- one where there are clear interfaces between
modules that can be replaced as needed.

You seem to be using agile in the traditional English sense. I'm
using it more as a placeholder for a set of common development
practices. The agile methodologies are things like Extreme
Programming, Scrum, and others, and cover a set of practices ranging
from Test Driven Development, to Continuous Integration, to
refactoring. You can read more about it at
http://en.wikipedia.org/wiki/Agile_software_development

> > I don't think we disagree, but I do think the kind of work we do is
> > different and that what we mean by documentation is very different.
>
> I think we are both in agreement on the need for full and comprehensive
> documentation. However, when you say ".. but in my experience, there is
> always a small amount of retrospective documentation (mostly related to the
> design of the software, quirks about the system learned during development,
> etc.)" I would say that these quirky aspects should have been discovered
> earlier than the final product test.

By "quirk" I mean something that doesn't necessarily affect the
product's functionality, but which is surprising or which could waste
someone else's time if you didn't document it. For example, in older
units, if you hooked up a remote control it would "just work". Now,
for a variety of reasons, when you connect a remote control, it won't
do anything until you explicitly assign a function to the remote.
This is a quirk in the sense that a user's expectation about how the
system would operate, based on past products, is wrong. And I can
tell them to RTFM where they will learn this... on page 23. But they
won't read the manual. But they will read a bright pink card that
says "ATTENTION" on it and describes the quirk.

There are also quirks outside of our control. Most of our systems are
controlled over Ethernet with a Windows-based application. We support
four flavors of Windows-- 200x, XP, Vista, and 7. There are
differences between these versions of Windows, and those differences
(along with the myriad of ways end users can configure their systems)
can lead to flaky communications. So while we faithfully follow the
relevant standards, Microsoft and other companies who get between our
devices and our application software can (and do) do their own thing.
And some of these quirks only showed up after end users got the
product into their racks and had problems communicating with them.

And finally, we produce devices that allow the end user to construct
arbitrary DSP functions on an audio channel.  I've never calculated
the total number of useful combinations possible, but it's an
insanely large number. There is no way we could exhaustively test
every single possible combination. So when I discovered during my
testing that a particular set of extreme settings on a feedback
suppressor, compressor, and parametric EQ could cause a weird effect
on the audio, I documented that as a quirk.

> More likely we would be heading for a contract review meeting to
> work out the cost impact of the changes and what extra money would
> need to be charged for implementing the change. My head does not explode
> that easily.

Maybe, but I once worked with a fellow who was a really good engineer,
but who came from a background of military and satellite
communications. In his past employment, it was critical to detail
every last aspect of a system before you started on anything, and he
would have an impressive paper trail for everything he did. When he
started work with us, his answer to how to deal with the constant
change we were facing was to go back to square one and redocument the
system, again creating an impressive paper trail. In the industry he
came from, this was valuable. In our industry, it has virtually no
value. He eventually left, unable to cope with the rate of change.

It's not all chaos. We don't start with a toaster and end up with a
lawn mower. But in such an environment, you naturally end up with
development strategies that embrace reasonable amounts of change.

John Passaniti

unread,
Jul 26, 2010, 2:29:26 AM7/26/10
to
On Jul 25, 5:36 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
wrote:

> > So the responsibility falls on you-- not your boss-- to prove your
> > claim...
>
> Claims need evidence. When you go to your boss with the claim of being able
> to re-write the whole thing again in two weeks you should really be able to
> back that claim with evidence of your capability. Do you have an accurate
> count of the number of function points involved in the software? Do you know
> how many lines of code you are going to produce? Do you know how much effort
> you will need to debug the code you produce? There are a whole raft full of
> questions that your boss will need answers to in order to have enough
> confidence in your claims.

Yes, that adds more detail to what I wrote.  But really, being able to
support one's claims is a basic job function every software developer
needs to do, regardless of the length of a project. It doesn't matter
if a project is two weeks or two years, coming up with accurate
estimates is fundamental. And it's not an easy skill, usually because
in the effort to abstract the problem, many developers don't account
for every part.

Where I last worked, we would do a gross task breakdown and then
continue to refine it. For each task, we would minimally break it
down into three parts-- design, implementation, and test. We would
also toss in an estimate of risk. While not perfect (what is?), the
intent was to force the developer to think about the various aspects
of each task. On top of this, we would put certain rules. For
example, if a single task was large (like 40 hours or more), that is a
potential warning sign that the task itself might not be broken down
enough for a decent estimate.  We would also pass the estimates
around and if someone thought an estimate was too small or too large,
we would discuss it.

To those who don't have to come up with accurate estimates for their
work, it may sound like a lot of busy work, but it serves multiple
purposes. It forces developers to seriously think about their design
and not pull numbers out of thin air. And because we were often in
competition with others for the same contract, it prevented developers
adding unjustifiable padding. And finally, as we usually had a 10%
guarantee (where if we went 10% over our estimate, we would eat the
development costs), it puts into sharp focus the need to make
estimates that can be justified.

> [%X -- the evaluation of required evidence I agree with John about ---%X]
>
> > I'm willing to bet that if we were to take a representative sampling
> > of software developers and create a histogram of the number of
> > developers they work with on their projects, that the number of people
> > working in teams of 300 or more are a *tiny* minority.  And as a tiny
> > minority, I fearlessly say that drawing conclusions about such a
> > population is likely to be suspect.
>
> What constitutes a team?
>
> I am currently working on a research project that employs about 1000 people.
> It is claimed to be a team.

I'm speaking specifically about software development projects. What
was presented in this thread was a 300-person team, all of them
*directly* contributing to the software.  And I'm specifically
bringing this up because it is a constant theme in comp.lang.forth
that I'm increasingly having a hard time believing is relevant to the
kind of work most people-- especially the crowd here-- have direct
experience with.

I'm looking at putting together a survey-- not intended to be
scientific, but rather to help focus dicussion in comp.lang.forth to
the realities of software development that most developers face, not
the phantasmagoric constructions often discussed.

The Beez'

unread,
Jul 26, 2010, 2:36:53 AM7/26/10
to
On Jul 25, 8:28 pm, Elizabeth D Rather <erat...@forth.com> wrote:
> > Well, that's said, but is that true?  If you cut off the long tail of
> > unpopular systems, how many popular Forth compilers are there?  How does
> > that compare to popular C compilers?  A lot of people use GCC and a
> > Unix-like OS (mostly Linux) in the C world, and another lot uses MSVC
> > and Windows, that's the two most popular ones, and if you have seen one
> > of them, you have seen one of them, just like with a Forth system.

> Adding to these accurate and appropriate remarks, Chuck mainly promotes
> writing a program about three times, not a zillion, and he's hardly
> alone in this.  That third pass produces a good solution.  As time
> passes, further mods are necessary for enhancements and changing
> requirements, but that's a reality of software development, independent
> of language.

(d) The Forth community suffers from an absolute lack of humor and
cannot distinguish real criticism from over-the-top, tongue-in-cheek
remarks. Maybe because there is a hint of truth in them?

Hans Bezemer

Paul Rubin

unread,
Jul 26, 2010, 2:40:21 AM7/26/10
to
John Passaniti <john.pa...@gmail.com> writes:
> I'm speaking specifically about software development projects. What
> was presented in this thread was a 300-person team, all of them
> *directly* contributing to the software. And I'm specifically
> bringing this up because it is a constant theme in comp.lang.forth
> that I'm increasingly having a hard time believing is relevant to the
> kind of work most people-- especially the crowd here-- have direct
> experience with.

Many large open source projects like GCC, the Linux kernel, etc. have
that many direct contributors, though usually not all active at the
same time. I've been involved in a few of them. Does that count?

John Passaniti

unread,
Jul 26, 2010, 2:48:48 AM7/26/10
to
On Jul 25, 6:51 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> I have no specific antipathy to Java: I'm saying that there are
> pleasures to be had from both complex and simple systems.  Complexity
> has its disadvantages, and you have to weigh that against the
> convenience of some language features.  Simple systems are not just
> about speed and resource constraints.

The problem I always have with discussions like this is that words
like "complexity" and "simple" are loaded. What they mean varies
depending on one's priorities. For example, along one axis, a
language like Ruby might be considered very complex. But along
another axis, the solutions one can come up with in Ruby are simple.
Forth itself is like that-- Forth is a very simple language, but the
effort one has to put into creating the infrastructure to support even
basic solutions can be considered complex. I guess what bothers me is
these words are rarely given a context; it's as if complex and simple
are completely unambiguous standards by which a system can be
measured.

But anyway...

> > I see it as a spiritual fellow-traveller of Forth in some ways, so
> > it's worth knowing both.
>
> I can see that, especially for the simpler LISPs.  From what I've seen
> of Common LISP I'm not sure it really applies there, though.

You've probably heard this before, but advocates of Scheme (which is
considered a dialect of Lisp) are constantly amused that the entire
Scheme standard is shorter than the index to Guy Steele's "Common
Lisp: The Language."

Still, while I reject the oft-stated "Forth is reverse Polish Lisp", I
do agree with the somewhat poetic "spiritual fellow-traveler" idea.
Especially when you take a really small Lisp (for example,
http://piumarta.com/software/lysp/) and compare it to a really small
Forth.

John Passaniti

unread,
Jul 26, 2010, 2:57:25 AM7/26/10
to
On Jul 25, 9:23 am, "The Beez'" <hans...@bigfoot.com> wrote:
> On Jul 22, 10:44 pm, Helmar <hel...@gmail.com> wrote:
> Nonsense. Forth is broken by three issues and three issues only:
> (a) Forth requires you to balance your stack. Since most programmers
> are unable to balance their malloc()/free() calls and consequently
> need garbage collection, Forth is not bound to be popular. In short,
> cause 1: the lack of skilled programmers.

I really wish that when years ago the term "garbage collection" was
invented, they had come up with something that sounds less
pejorative. There is this notion that garbage collection in a
language somehow relates to waste or laziness or a lack of skill.
That is complete bunk. It's nothing more than an abstraction of
storage and it directly relates to a style of programming where the
programmer is free to capture and return execution contexts.

> (b) Forth traditionally has a high threshold because of the lack of
> documentation. Since Forth is only used by very skilled programmers
> they don't see why a skilled programmer would need documentation. The
> code is clear enough. In short, cause 2: you either desperately WANT
> to learn Forth or you give up because there is no easy way to pick it
> up.

So "very skilled programmers" don't need to document their work? What
you call "very skilled" can also be called "very lazy".

As for the rest of your statements, I have two words for you: sour
grapes.

The Beez'

unread,
Jul 26, 2010, 3:06:29 AM7/26/10
to
On Jul 26, 8:57 am, John Passaniti <john.passan...@gmail.com> wrote:
> On Jul 25, 9:23 am, "The Beez'" <hans...@bigfoot.com> wrote:
> I really wish that when years ago the term "garbage collection" was
> invented, they had come up with something that sounds less
> pejorative.  There is this notion that garbage collection in a
> language somehow relates to waste or laziness or a lack of skill.
> That is complete bunk.  It's nothing more than an abstraction of
> storage and it directly relates to a style of programming where the
> programmer is free to capture and return execution contexts.
As a matter of fact, in real-life programs GC is even more efficient
than frequent malloc()/free() calls, contrary to expectation. That's
why I only allocate big chunks of memory, which makes balancing the
calls easier.
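For readers who want to see what the "big chunks" approach buys in Forth
terms, here is a minimal sketch of the idea (not Hans's actual code): one
ALLOCATE up front, small allocations carved out of that region by hand,
and a single FREE to balance it. POOL-SIZE, POOL, POOL-PTR, POOL-ALLOT and
POOL-RELEASE are invented names, and the sketch ignores alignment.

\ Sketch only -- invented names, no alignment handling.
65536 CONSTANT POOL-SIZE
VARIABLE POOL        \ base address of the one big region
VARIABLE POOL-PTR    \ next free address inside it

: POOL-INIT ( -- )
   POOL-SIZE ALLOCATE THROW  DUP POOL !  POOL-PTR ! ;

: POOL-ALLOT ( u -- addr )          \ carve u bytes out of the region
   POOL-PTR @ SWAP  OVER +
   DUP POOL @ POOL-SIZE + U> ABORT" pool exhausted"
   POOL-PTR ! ;

: POOL-RELEASE ( -- )               \ one FREE balances the one ALLOCATE
   POOL @ FREE THROW ;

Everything carved out with POOL-ALLOT disappears with the single
POOL-RELEASE, which is exactly the balancing property being described.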

> So "very skilled programmers" don't need to document their work?  What
> you call "very skilled" can also be called "very lazy".

You're asking that of a guy whose compiler comes with a 450-page
manual?

> As for the rest of your statements, I have two words for you: sour
> grapes.

On the contrary, I find the subject and discussion rather amusing.

Hans Bezemer

John Passaniti

unread,
Jul 26, 2010, 3:17:52 AM7/26/10
to
On Jul 25, 2:01 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> In the case of the example I gave, there was no such 20/20 hindsight.  
> The original code was completely crap, no single line survived.  

I addressed that. Unless your claim is that you didn't even look at
the original code and had absolutely no knowledge of its design, then
you most certainly did have the benefit of starting well before the
original developers did. It has nothing to do with how much code you
may have reused.

> The solution was fairly trivial:  I just did it.  And *then* I told the
> boss that it took me two weeks to do it, while he spoiled two
> "resources" and wasted a lot of budget (needless to say, so I didn't say
> it).  I remember some teeth gnashing, but I got a huge raise shortly
> afterwards, and no further questions about what I'm going to do.

Then you are in a unique situation where you apparently had more
autonomy than most do. So I'm not sure how relevant your experience
is to the rest of comp.lang.forth. Are you suggesting that others
faced with presenting a repair/rework/rewrite set of choices to a boss
should instead do whatever they think is best, and hope
that it all works out in the end?

> When I fail, I wasted two weeks.  When I don't try, I will waste three
> months on the broken code.  If you have any grasp of game theory, even a
> 10% chance of me succeeding is worth a try.

That depends entirely on the cost of failure. It appears you work in
an industry where if you waste two weeks, it doesn't really matter.
Not everyone does, and many people have this thing called a sense of
responsibility which says that when you're effectively spending other
people's money, you are held to a higher standard than "I thought it
was right, so I went ahead and did it." Specifically, you have to
prove your claims.

> > So the responsibility falls on you-- not your boss-- to prove your
> > claim...
>
> The proof of the pudding is its eating.

Again, you appear to be in an industry that can easily absorb a
failure. Must be nice. Since you apparently work outside many
developers' experience, would you like to make a suggestion that
relates to the real world?

> > I know that if I was your boss, I would *minimally* ask for
> > all of the following:
>
> > 1.  A clear statement of your approach.
> > 2.  A clear statement of what was wrong with the original design.
> > 3.  A rough task list, detailing the work you will do.
> > 4.  A set of milestones for the next two weeks.
>
> You seem to grow pointy hairs, as well.  Under your command, I expand
> the estimation to 6 weeks.  

Yes, I've seen your type before. You don't believe you have to
justify your estimates with anything more than "I'm brilliant, believe
what I say." And when you're faced with having to rationally justify
your claims, you pretend there is some enormous cost to such
justification.

> All of those were presented to the customer; as it was a one-hour
> presentation, it was not very detailed.  If you make outrageous
> claims, they will raise eyebrows, even when you can fully support
> them.  Sorry, that's just normal psychology.

Not in my experience. An outrageous claim related to software
development that is supported by fact is by definition not
outrageous.

Paul Rubin

unread,
Jul 26, 2010, 3:38:14 AM7/26/10
to
John Passaniti <john.pa...@gmail.com> writes:
> You've probably heard this before, but advocates of Scheme (which is
> considered a dialect of Lisp) are constantly amused that the entire
> Scheme standard is shorter than the index to Guy Steele's "Common
> Lisp: The Language."

I'd say the core language of CL isn't all that much more complicated
than Scheme, and in some ways it's easier to implement (no first-class
continuations). It has some creature comforts like keyword arguments
that bloat up the language and manual somewhat, but aren't that
conceptually complicated and which do make programming more pleasant.
Most of the CL manual basically describes library functions, and the
Scheme reports used to ignore that issue. The latest Scheme report
(R6RS) addresses the issue though.

> Especially when you take a really small Lisp (for example,
> http://piumarta.com/software/lysp/) and compare it to a really small
> Forth.

That is pretty cute, though very minimal. I think most Lisp hackers
have written one of those at one time or another, just like most Forth
hackers have probably written their own Forth.

For embedded Lisp I've been interested in this:

http://hedgehog.oliotalo.fi/

It is a purely functional lisp dialect with a fairly smart bytecode
compiler written in C. The runtime embedded part is a bytecode
interpreter that fits in around 20 kbytes, including a generational gc
and some nice builtin libraries such as functional dictionaries based on
AVL trees. Its authors used it in a number of embedded products in the
early 2000's so it has some evolved practicality, if that makes any
sense.

Forth hackers interested in Lisp for desktop applications might also
check out www.picolisp.org. It is bigger than a tiny Lisp but more
traditional than Scheme and smaller than CL.

Paul Rubin

unread,
Jul 26, 2010, 4:22:32 AM7/26/10
to
John Passaniti <john.pa...@gmail.com> writes:
> No, not really. In the industry I'm in (networked digital audio and
> related signal processing, primarily for sound reinforcement
> applications),

I wonder whether you could do just about all the computing and network
stuff side of that on a cheap PC motherboard, with the signal processing
done in software. PC's are very fast these days (especially with
graphics accelerators), memory is cheap, you could use the hard drive as
a digital recorder and have a GUI mixing console, etc. Your custom
hardware would just be ADC/DAC's, power controllers and so forth,
stuffed in a box with the motherboard, connected internally by firewire
or whatever.

> I don't work in safety-critical systems,

Given the multimedia stuff you're involved with, you may soon have to be
thinking about security if you're not already. That has elements in
common with safety-critical systems in the sense that you have to defend
against what would normally be considered extremely unlikely events. In
the safety case it's because the unlikely event could happen by accident
and kill someone; in the security case because an attacker can concoct a
way to make the "unlikely" event happen on purpose. Googling "jpeg
buffer overflow" may indicate what you're up against.

> One of my current projects is a multichannel remote control. ...


> Right now, it has bi-color LEDs on each channel, so I can light up
> each channel red, green, or orange. But who knows-- they might come
> back and say they want a tri-color LED in order to display more
> states. That will require a hardware change. Or they might require
> that it communicates with more than one device over Ethernet.

Why build custom hardware for this at all? You could code it as a
smartphone (Android, Maemo, Iphone) application, with a mixer-like GUI
communicating with the main system by wifi (or USB/Ethernet if you
must), so that the user can see everything going on in the system and
control it from anywhere in the room. It's unfortunate that the phones'
microphones aren't good enough to use the phone as a full-range spectrum
analyzer, but maybe they could be usable as a basic SPL meter that would
help the engineers walk around and adjust the mix for room effects.

> So when I discovered during my testing that a particular set of
> extreme settings on a feedback suppressor, compressor, and parametric
> EQ could cause a weird effect on the audio, I documented that as a quirk.

I'd hope that a little Matlab modelling plus doing the signal processing
in floating point PC arithmetic could prevent most problems like this,
but I've never done anything like it.

The stuff you're doing sounds pretty cool but I'm still trying to figure
out if you use Forth in any of it.

Paul Rubin

unread,
Jul 26, 2010, 4:28:13 AM7/26/10
to
"The Beez'" <han...@bigfoot.com> writes:
> As a matter of fact, in real-life programs GC is even more efficient
> than frequent malloc()/free() calls, contrary to expectation. That's
> why I only allocate big chunks of memory, which makes balancing the
> calls easier.

You're describing "region based memory management", which is a useful
and effective technique, but not a cure-all. Some applications will
involve a lot of objects with unpredictable lifetimes. So having a gc
around will still save you headache.

Andrew Haley

unread,
Jul 26, 2010, 4:53:32 AM7/26/10
to
John Passaniti <john.pa...@gmail.com> wrote:
> On Jul 25, 6:51?am, Andrew Haley <andre...@littlepinkcloud.invalid>

> wrote:
>> I have no specific antipathy to Java: I'm saying that there are
>> pleasures to be had from both complex and simple systems. Complexity
>> has its disadvantages, and you have to weigh that against the
>> convenience of some language features. Simple systems are not just
>> about speed and resource constraints.
>
> The problem I always have with discussions like this is that words
> like "complexity" and "simple" are loaded. What they mean varies
> depending on one's priorities. For example, along one axis, a
> language like Ruby might be considered very complex. But along
> another axis, the solutions one can come up with in Ruby are simple.
> Forth itself is like that-- Forth is a very simple language, but the
> effort one has to put into creating the infrastructure to support even
> basic solutions can be considered complex.

It's been argued here that the trouble with Forth is that it's missing
all this stuff that is provided in other languages that you really
need to write programs, so any Forth application has to be more
complex because the application code also has to provide all this
stuff. But the unstated assumption there is that this other stuff is
needed by the application. Maybe it is, maybe not.

> I guess what bothers me is these words are rarely given a context;
> it's as if complex and simple are completely unambiguous standards
> by which a system can be measured.

True, there is not a single standard for simplicity, but it is quite
undeniable that a Forth system that perhaps consists of some tens of
pages of source code is simple. It is hard to escape the conclusion
that if an application is created by extending such a basic system
with just what is required, then the resulting system will also be
simple, considered as a whole.

>> > I see it as a spiritual fellow-traveller of Forth in some ways, so
>> > it's worth knowing both.
>>
>> I can see that, especially for the simpler LISPs. From what I've seen
>> of Common LISP I'm not sure it really applies there, though.
>
> You've probably heard this before, but advocates of Scheme (which is
> considered a dialect of Lisp) are constantly amused that the entire
> Scheme standard is shorter than the index to Guy Steele's "Common
> Lisp: The Language."

Indeed. :-)

> Still, while I reject the oft-stated "Forth is reverse Polish Lisp", I
> do agree with the somewhat poetic "spiritual fellow-traveler" idea.
> Especially when you take a really small Lisp (for example,
> http://piumarta.com/software/lysp/) and compare it to a really small
> Forth.

Oh lovely, thanks. Many times I've been tempted to write a small
implementation of LISP 1.5 with the wrinkles ironed out. Nice to see.

Andrew.

Paul Rubin

unread,
Jul 26, 2010, 5:21:02 AM7/26/10
to
Krishna Myneni <krishna...@ccreweb.org> writes:
> applications built from thousands of lines of Forth code. Such
> applications can of course be done in C/C++,

Well, C has a much different style than Forth, but I think of them both
as low level languages, so a more proper comparison would be with
Ruby or Python or something.

> For example, I may have to interactively tune a proportionality
> constant in a PI control loop. Now, it's a simple matter of
> interactively changing the value of a variable from the Forth
> prompt.

Sometimes I do stuff like that for C programs with gdb.

> There's no need to write a lot of meaningless user interface code,
> which would consume hundreds or possibly thousands of lines of code.

Oh, that's an overstatement even for pure C. Use scanf, use a library
like GNU readline, etc.

> However, I believe the notions that Forth is only for small systems
> and only for very small applications, are not generally valid --- the
> application is the important consideration.

Fair enough, though in desktop or server programming these days, a few
thousand lines is a small program.

Stephen Pelc

unread,
Jul 26, 2010, 5:32:19 AM7/26/10
to
On Fri, 23 Jul 2010 23:17:59 -0700, Paul Rubin
<no.e...@nospam.invalid> wrote:

>I haven't (so far) had much to do with small cheap custom hardware and I
>don't understand the economics of it that well. I do know that desktop
>computer RAM costs about 3 cents per megabyte, so it seems odd that
>micros with more than a few kilobytes are quite expensive.

Microcontrollers tend to use very conservative processes. Freescale
are moving some controllers from a 250nm process to a 90nm process.
For desktop stuff 45nm is already yesterday's process. The cost of a
piece of silicon starts with a very complicated calculation. One of
the major influences is the area of silicon used.

Desktop RAM comes in single purpose RAM built on a silicon process
optimised for use as desktop RAM, using one transistor per bit.
In most cases, it uses a very small geometry.

Microcontroller RAM is typically static and uses six transistors
per bit. It is built on a silicon process optimised for a number
of uses. Even for 32 bit microcontrollers, it is rare to get more
than 64 kbytes of RAM, and there's usually a price premium over
16 kbytes.

High speed static RAM chips for embedded use are grotesquely
expensive. What is increasingly used is SDRAM, but the smallest is
around 32 Mbytes and costs much less than a high speed 128 kbyte
static RAM chip.

Stephen

--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

Paul Rubin

unread,
Jul 26, 2010, 5:36:58 AM7/26/10
to
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
> True, there is not a single standard for simplicity, but it is quite
> undeniable that a Forth system that perhaps consists of some tens of
> pages of source code is simple.

I don't know about that. It could be much harder to audit such a system
for (say) storage management bugs, than a much larger system written in
a type-safe, gc'd language. I'd consider that to be a form of
complexity.

Andrew Haley

unread,
Jul 26, 2010, 5:51:13 AM7/26/10
to

I'm considering the system as a whole.

Andrew.

Paul Rubin

unread,
Jul 26, 2010, 6:07:05 AM7/26/10
to
steph...@mpeforth.com (Stephen Pelc) writes:
> Microcontrollers tend to use very conservative processes. Freescale
> are moving some controllers from a 250nm process to a 90nm process.

Hmm, that's interesting. Is that for economy reasons, or because of
issues like EMI resistance? I remember noticing Chuck Moore's
processors are in 180nm, and I figured it was because he couldn't yet
afford a newer process, but maybe there's other issues I didn't think
of. While his chip is a presumably carefully designed completely custom
ASIC, I wondered if the same processor could be synthesized on a fancy
FPGA and be faster just because the FPGA is made in 45nm or whatever.

> Microcontroller RAM is typically static and uses six transistors
> per bit.... Even for 32 bit microcontrollers, it is rare to get more
> than 64 kbytes of RAM, and there's usually a price premium over
> 16 kbytes.

Good point, though maybe the numbers are a bit outdated? I believe that
the cpu in my Sansa mp3 player has an ARM core and 320 kbytes of ram.
That is cheap consumer electronics, so as such, it might not meet the
EMI requirements of industrial or automotive electronics. That might
let them get away with a less conservative process.

Also, desktop processors sometimes have tens(?) of MB of SRAM cache.

Krishna Myneni

unread,
Jul 26, 2010, 8:35:06 AM7/26/10
to
On Jul 26, 4:21 am, Paul Rubin <no.em...@nospam.invalid> wrote:

> Krishna Myneni <krishna.myn...@ccreweb.org> writes:
> > applications built from thousands of lines of Forth code. Such
> > applications can of course be done in C/C++,
>
> Well, C has a much different style than Forth, but I think of them both
> as low level languages, so a more proper comparison would be with
> Ruby or Python or something.
>
> > For example, I may have to interactively tune a proportionality
> > constant in a PI control loop. Now, it's a simple matter of
> > interactively changing the value of a variable from the Forth
> > prompt.  
>
> Sometimes I do stuff like that for C programs with gdb.
>

Frankly, running a C program under a debugger comes nowhere near the
interactivity provided by a Forth environment when it comes to
flexibility. I think you've mentioned before about
setting variables, and executing a function by passing args to it.
But, to give one example, what about writing a function on the fly in
high-level code, and executing it?
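To make that concrete, here is a minimal sketch of the kind of session
being described, assuming a hypothetical gain value and control word
(KP, ERROR>DRIVE and SWEEP are invented for illustration, not taken from
any real application):

250 VALUE KP                        \ proportional gain, scaled by 1000
: ERROR>DRIVE ( error -- drive )  KP 1000 */ ;

\ Later, at the Forth prompt, while the system is up:
300 TO KP                           \ retune the constant interactively
: SWEEP ( -- ) 5 0 DO  I 10 *  ERROR>DRIVE .  LOOP ;  \ a word written on the fly
SWEEP                               \ prints 0 3 6 9 12

Both the retuning and the throwaway test word happen without stopping
the program or leaving the prompt, which is the point being made.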

> > There's no need to write a lot of meaningless user interface code,
> > which would consume hundreds or possibly thousands of lines of code.
>
> Oh, that's an overstatement even for pure C.  Use scanf, use a library
> like GNU readline, etc.
>

Parsing complex single line commands consisting of arguments and
command names can take more than a handful of lines of code. Of course
you can prompt for all arguments, but that gets to be annoying quickly
for an experienced user. Also, don't forget the huge switch statement
that's needed to actually execute the command. When I say that even
thousands of lines of code may be needed, I'm thinking of a GUI
interface.

> > However, I believe the notions that Forth is only for small systems
> > and only for very small applications, are not generally valid --- the
> > application is the important consideration.
>
> Fair enough, though in desktop or server programming these days, a few
> thousand lines is a small program.

The point was not so much that the application was not small, but that
the application required system resources which would not be available
in the type of systems to which you were referring earlier (those with
memory constraints on the order of 100K) as the useful targets for
Forth code.

Krishna

Aleksej Saushev

unread,
Jul 26, 2010, 4:08:36 PM7/26/10
to
John Passaniti <john.pa...@gmail.com> writes:

> On Jul 25, 6:51 am, Andrew Haley <andre...@littlepinkcloud.invalid>
> wrote:
>>
>> I can see that, especially for the simpler LISPs.  From what I've seen
>> of Common LISP I'm not sure it really applies there, though.
>
> You've probably heard this before, but advocates of Scheme (which is
> considered a dialect of Lisp) are constantly amused that the entire
> Scheme standard is shorter than the index to Guy Steele's "Common
> Lisp: The Language."

You should not ignore the arguments of the opposite side; they are relevant here.

The point is that Common Lisp is actually smaller than Scheme, because
the Common Lisp standard describes a lot more than just the core
language, unlike e.g. R5RS. In other words, don't forget to add more
than a hundred SRFIs to the Scheme standard when comparing it to
Common Lisp.


--
HE CE3OH...

Paul E. Bennett

unread,
Jul 26, 2010, 4:36:33 PM7/26/10
to
Paul Rubin wrote:

> John Passaniti <john.pa...@gmail.com> writes:
>> No, not really. In the industry I'm in (networked digital audio and
>> related signal processing, primarily for sound reinforcement
>> applications),

[%X]


>> I don't work in safety-critical systems,
>
> Given the multimedia stuff you're involved with, you may soon have to be
> thinking about security if you're not already. That has elements in
> common with safety-critical systems in the sense that you have to defend
> against what would normally be considered extremely unlikely events. In
> the safety case it's because the unlikely event could happen by accident
> and kill someone; in the security case because an attacker can concoct a
> way to make the "unlikely" event happen on purpose. Googling "jpeg
> buffer overflow" may indicate what you're up against.
>
>> One of my current projects is a multichannel remote control. ...
>> Right now, it has bi-color LEDs on each channel, so I can light up
>> each channel red, green, or orange. But who knows-- they might come
> back and say they want a tri-color LED in order to display more
>> states. That will require a hardware change. Or they might require
>> that it communicates with more than one device over Ethernet.
>
> Why build custom hardware for this at all? You could code it as a
> smartphone (Android, Maemo, Iphone) application, with a mixer-like GUI
> communicating with the main system by wifi (or USB/Ethernet if you
> must), so that the user can see everything going on in the system and
> control it from anywhere in the room. It's unfortunate that the phones'
> microphones aren't good enough to use the phone as a full-range spectrum
> analyzer, but maybe they could be usable as a basic SPL meter that would
> help the engineers walk around and adjust the mix for room effects.

I think you will find John works with the professional audio/video end of
the market (bands, live broadcasts, studio recording etc). Custom is quite
often the way to go there.

[%X]

--
********************************************************************
Paul E. Bennett...............<email://Paul_E....@topmail.co.uk>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************

Hugh Aguilar

unread,
Jul 26, 2010, 5:29:16 PM7/26/10
to
On Jul 24, 3:19 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
wrote:
> John Passaniti wrote:
> > On Jul 23, 7:34 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
> >> > Why are you against documentation.
>
> >> Documentation after the software is written is a good idea, and the
> >> sooner the better.
>
> I write no documentation after the software is written. It is much too late
> by then as once the software is written and tested it is out the door and
> the next project beckons.
>
> >> Documentation *before* the software is written does more harm than
> >> good.

When I was working as a Forth programmer, I was expected to write
documentation for my software after I wrote it. I'm not talking about
comments in the source-code, which are done during the writing of the
program, but of a document describing the program. This is actually
how I originally got involved in LaTeX. I wrote such a document for
MFX (using Borland Sprint, which is based on Scribe, which is based on
LaTeX) even though I knew that there was no next project and that I
was soon to be on the street looking for work again. I was also well
aware that a reference from Testra is about as useful as a reference
from Burger King --- experience as a Forth programmer is worthless in
regard to getting a job as a computer programmer. I was just being
loyal. Besides that, I was proud of the program and I wanted to
document what I had done.

Most of the time when programmers don't write such a document (or
purposely write a useless document), it is not because "the next
project beckons" --- it is because they want job security by being the
only person who knows how the program works --- they are hanging on to
that project forever and avoiding any next project that may beckon.
I'm not like that. I'm proud of MFX, but I don't want to make a career
out of maintaining it --- other people are maintaining it now. Of
course, the downside of that plan is that I'm driving a cab now. When
I worked at Testra though, I only made $10/hour and, as the projects
came to a conclusion, my hours would dwindle down to 20 and then 10
hours per week. I can make more money driving a cab than working as a
Forth programmer. Being a Forth programmer is mostly just something
that I do for fun; even when I'm getting a paycheck for it, the
principal benefit is good old fun.

What I said about documentation written *before* the software is
written, doing more harm than good, was a reference to how some people
are afraid of writing software and try to avoid doing so by writing
documentation for vapor-ware instead. Programming is somewhat of a
harsh world to live in, because the programs have to actually *work*.
If your program doesn't work, then it is worthless, no matter how good
it looks, or how much research was done on the algorithms, or how
idiomatic the coding style is, etc., etc.. People who have a lot of
difficulty in getting their program to work will typically try to
promote themselves into the job of systems analyst. They are
constantly looking busy by doing Important Design Work (woo hoo!), but
they don't actually write software. They also routinely criticize
software that other people have written, especially when they are
within earshot of the boss. The idea is to generate a lot of
controversy and argument in regard to choice of algorithm, coding
style, etc., and this is all a big distraction from the fact that they
themselves have never written any software. Also, if the boss is non-
technical, it may be possible to convince her of their expertise in
programming, inasmuch as they appear able, at a glance, to spot the
myriad problems in other people's software.

Here is an example of this strategy:

On Apr 15, 1:39 am, John Passaniti <john.passan...@gmail.com> wrote:
> Yes, it sucks for what you intended it to be used for.  It may have
> other uses, but as a symbol table, I can confidently say it isn't very
> good.
> ...
> > To the best of my knowledge, nobody has ever figured
> > out how my symtab algorithm works.
>
> How symtab works doesn't matter.  What matters is the flawed
> fundamental assumptions that go into the design.
>
> > All the criticism revolved around
> > what symtab is *not* --- not a splay tree and not a hash table ---
> > rather than on what symtab *is*.
>
> Maybe others, but my criticism was on your flawed fundamental
> assumptions about the distribution of symbols in real-world data sets.
>
> Unfortunately for you, the comp.lang.forth
> community cares more about thinking about how to solve problems and
> actually testing their solutions than wildly spewing code.  

Let me tell you about a creature that is called a "pilot fish." This
is a small fish that bites into the back of a large fish and doesn't
let go. Its jaw fuses with the vertebra of the big fish and it stays
there forever as a parasite. It is called a "pilot fish" because,
quite humorously, it appears to be a pilot riding on top of the big
fish and steering the big fish around. The exact opposite is true of
course, because it is just along for the ride and it is not
controlling the big fish. The analogy here is pretty clear; the
programmers-who-don't-program are the pilot fish. They are just along
for the ride, but they are pretending to be steering the project. If
the project is a success, they take credit for it on the basis that
their *Important Design Work* (woo hoo!) was the crucial factor in the
project's success.

I'm reminded of what a book author once said about critics, that they
are like eunuchs in a harem --- they know how it is done, they have
seen it done many times, but they can't do it themselves. LOL

Bernd Paysan has already stated the same idea (although he used the
term "sissy" whereas I use the term "eunuch"):

On Apr 16, 11:02 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> John Passaniti wrote:
> > Texas?  I think Hugh is from Colorado.  8-)
>
> Close enough ;-).
>
> > When I'm at home, happily typing away on personal projects, the time
> > wasted by not thinking first is something I can easily absorb.  But
> > when I'm at work, that wasted time is someone's money.  Worse, when
> > that wasted time means a product ship date either slips or desired
> > features have to be removed to meet a deadline, that's an opportunity
> > cost.
>
> You don't get it.  This is not wasted time, this is just *perceived* as
> wasted time, because you throw something away.  Coding is quick.  Testing
> with real data is also quick.  Thinking ahead is slow and error-prone,
> because it lacks feedback, and it also lacks the reward of having something
> working early.
>
> I don't start coding before I have a rough idea how to implement it - this
> is obvious.  But I certainly start coding before I have a proof that this
> will be it.  The proof of the pudding is its eating, if you are a cook, and
> you are musing about whether a pudding is good or not *before even starting*
> with making it, you are fooling yourself.  When the pudding isn't good, feed
> it to the pigs, and try again.
>
> You usually can't tell that you are approaching problems like that to your
> customer.  Solution: You don't.  But again: you are not wasting time when
> you try out things and throw bad results away.  Not at all.  You are wasting
> time by *not* exploring your solution space, and by *not* throwing bad
> results away, but sticking to them.
>
> > I'm going to take a wild guess here.  When you spent the time to code
> > all three algorithms, you didn't have a half-million dollars of
> > electronics components sitting on a shipping dock waiting for your
> > software.
>
> No.  It took a few hours to do that.  A few hours, John, and they were *not*
> wasted in any way.  We have a big customer who's just as sissy as you are
> here.  It doesn't matter if there are multi-billion dollars at risk, and
> sissy managers in your back, or if it's a hobby project.  All my hobby
> projects are *much more* time-constrained than my work projects, because my
> time is already mostly eaten up by work hours, sleep, and R&R.
>
> > And I'm going to guess that as you were running your tests
> > to compare results, you didn't have a production manager staring at a
> > list of employees, trying to determine who should be cut if the
> > software didn't hit the deadline.
>
> Indeed.  But the only times I was struggling with deadlines was when I was
> early in my career and thought I should do what I'm ordered to literally.  I
> wasted 2 or 3 months with a broken I²C design, inherited from a university
> cooperation where they had spent already half a year on it, and two ex-
> coworkers spent another 6 man-months on it.  The next project that asked for
> an I²C, I did it differently.  I threw the crap away, and had a sane
> interface within a week - by just deploying state of the art agile
> techniques - test driven, transparent, simple design.
>
> --
> Bernd Paysan
> "If you want it done right, you have to do it yourself!"http://www.jwdt.com/~paysan/

Paul Rubin

unread,
Jul 26, 2010, 10:12:10 PM7/26/10
to
"Paul E. Bennett" <Paul_E....@topmail.co.uk> writes:
> I think you will find John works with the professional audio/video end of
> the market (bands, live broadcasts, studio recording etc). Custom is quite
> often the way to go there.

True, and I'm just a software guy riffing. I do a little bit of
recreational live recording, not with anything like professional gear,
and not any amplification etc, so I don't claim any exposure to the type
of stuff John makes. But I just see over and over again how the audio
industry (or other industries) make some totally underpowered product,
then over a period of many years incrementally improves it to what it
obviously should have been in the first place. So John mentions he's
building a gadget with a 2-color led, but then his competition next year
might pull ahead by making a model with a 3-color led, and then John
will have to top it by using a separate led for each channel, and then
the competition will build one with a bar graph, and so forth. It's
obvious what the end point of that progression is, and it can be built
today in pure software with relatively little fuss. So why not just
skip the intermediate steps and go straight to the product that people
actually want?

I'm sort of re-living my own experience waiting for the development of a
sane digital recorder after so many iterations of stupidly crippled ones
and artificial technical obstacles. Anyway, this is just a rant, even
more off-topic than most of the rest of the discussion, you know how it
goes.

Paul Rubin

unread,
Jul 26, 2010, 10:44:45 PM7/26/10
to
Krishna Myneni <krishna...@ccreweb.org> writes:
> Frankly, running a C program out of a debugger has very little in
> comparison with the interactivity provided by a Forth environment,
> when it comes to flexibility. I think you've mentioned before about
> setting variables, and executing a function by passing args to it.
> But, to give one example, what about writing a function on the fly in
> high-level code, and executing it?

Well, you're changing the requirements in mid-conversation, so it's of
course possible to steer it any which way by doing that. But gdb's
macro capability can be of help for stuff that you might otherwise do
with interactive patching, though of course it's just a partial
substitute.

> Parsing complex single line commands consisting of arguments and
> command names can take more than a handful of lines of code. Of course
> you can prompt for all arguments, but that gets to be annoying quickly
> for an experienced user. Also, don't forget the huge switch statement
> that's needed to actually execute the command.

Nah, if you use readline and a command parsing library, it's maybe
a few dozen lines. Instead of a switch statement you'd typically
register each command with a callback.
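The same registration idea has a direct Forth analogue: instead of a
switch, the commands live in their own wordlist and are dispatched
through their execution tokens. A minimal sketch, with invented names
(commands, run-command, hello, stats), might look like this:

WORDLIST CONSTANT commands
: in-commands ( -- ) commands SET-CURRENT ;
: in-forth    ( -- ) FORTH-WORDLIST SET-CURRENT ;

in-commands
: hello ( -- ) ." hi there" CR ;
: stats ( -- ) ." 42 packets seen" CR ;
in-forth

: run-command ( c-addr u -- )       \ look the name up, then call it
   commands SEARCH-WORDLIST
   IF EXECUTE ELSE ." unknown command" CR THEN ;

\ S" hello" run-command    \ would print: hi there

Adding a command means adding one definition; no central dispatch table
has to be edited.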

> When I say that even thousands of lines of code may be needed, I'm
> thinking of a GUI interface.

That's still not 1000's of lines even in C, though doing it in C is
quite painful. In Python (and similar languages, Python is just what
I'm most used to these days) it's pretty easy. You can also easily
embed a web server in a Python script, so other people can use
your program remotely without having to install anything, and you
can upgrade the software at any time, etc.

> The point was not so much that the application was not small, but that
> the application required system resources which would not be available
> in the type of systems to which you were referring earlier (those with
> memory constraints on the order of 100K) as the useful targets for
> Forth code.

Well, what you're familiar with and comfortable with counts for a lot.
I'd just expect much higher development and debugging time for a Forth
or C level language than for a similar program in a Python or Lisp level
language, if machine resources weren't too constrained.

Paul Rubin

unread,
Jul 26, 2010, 11:10:38 PM7/26/10
to
"Paul E. Bennett" <Paul_E....@topmail.co.uk> writes:
> I think you will find John works with the professional audio/video end of
> the market (bands, live broadcasts, studio recording etc). Custom is quite
> often the way to go there.

True, and I'm just a software guy riffing. I do a little bit of
recreational live recording, not with anything like professional gear,
and not any amplification etc, so I don't claim any exposure to the type
of stuff John makes. But I just see over and over again how the audio
industry (or other industries) make some totally underpowered product,
then over a period of many years incrementally improves it to what it
obviously should have been in the first place.

I get the following picture in my imagination (of course it's not
necessarily reality). John mentions he's building a gadget with a
2-color led, but then his competition next year might pull ahead by
making a model with a 3-color led, and then John will have to top it by
using a separate led for each channel, and then the competition will
build one with a bar graph, and so forth. It's obvious what the end
point of that progression is, and it can be built today in pure software
with relatively little fuss. So why not just skip the intermediate
steps and go straight to the product that people actually want?

I'm sort of re-living my own experience waiting for the development of a
sane digital recorder after so many iterations of stupidly crippled ones
and artificial technical obstacles. Plus, I'm very impressed with recent
phone handsets--they are equivalent to laptop computers from not all
that long ago.

Anyway, this is just a rant, even more off-topic than most of the rest
of the discussion, you know how it goes. (This message is slightly
edited from an earlier version that I cancelled due to unclarity, though
the cancel may not propagate).

Krishna Myneni

unread,
Jul 27, 2010, 8:17:50 AM7/27/10
to
On Jul 26, 9:44 pm, Paul Rubin <no.em...@nospam.invalid> wrote:

> Krishna Myneni <krishna.myn...@ccreweb.org> writes:
> > Frankly, running a C program out of a debugger has very little in
> > comparison with the interactivity provided by a Forth environment,
> > when it comes to flexibility. I think you've mentioned before about
> > setting variables, and executing a function by passing args to it.
> > But, to give one example, what about writing a function on the fly in
> > high-level code, and executing it?
>
> Well, you're changing the requirements in mid-conversation, so it's of
> course possible to steer it any which way by doing that.  But gdb's
> macro capability can be of help for stuff that you might otherwise do
> with interactive patching, though of course it's just a partial
> substitute.
>

We're just discussing the benefits of having the Forth environment. I
just find it hard to believe that using a debugger can approach the
level of flexibility of the Forth environment. I've used debuggers in
the past, when developing in C/C++ (even the GUI-based debugger in
Visual C++ a long long time ago), and gdb a bit, some time ago.

> > Parsing complex single line commands consisting of arguments and
> > command names can take more than a handful of lines of code. Of course
> > you can prompt for all arguments, but that gets to be annoying quickly
> > for an experienced user. Also, don't forget the huge switch statement
> > that's needed to actually execute the command.
>
> Nah, if you use readline and a command parsing library, it's maybe
> a few dozen lines.  Instead of a switch statement you'd typically
> register each command with a callback.  
>

Ok. Even then, why bother? With Forth code, a primitive set of
application specific commands is provided free of charge. With only
slight additional effort, the command syntax can be tailored to my
liking, e.g. having the command arguments be specified in infix
notation. An example is my simple notes database application,

ftp://ccreweb.org/software/kforth/examples/notes.4th
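As an illustration of that "free of charge" point, here is a generic
sketch (not code from notes.4th; the words gain, gain:, gain?, note" and
note? are invented): each command is just an ordinary Forth word, so the
text interpreter supplies all the parsing and dispatch.

VARIABLE gain
: gain: ( n -- )  gain ! ;          \ command:   7 gain:
: gain? ( -- )    gain @ . ;        \ command:   gain?

CREATE note-buf 80 CHARS ALLOT   VARIABLE note-len
: note" ( "text<quote>" -- )        \ command:   note" call the studio"
   [CHAR] " PARSE  DUP note-len !  note-buf SWAP CHARS MOVE ;
: note? ( -- )  note-buf note-len @ TYPE CR ;

There is no command table and no switch statement; adding a command
means defining one more word, and the syntax can be bent further with
parsing words such as note" above (the sketch does no bounds checking).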


> > When I say that even thousands of lines of code may be needed, I'm
> > thinking of a GUI interface.
>
> That's still not 1000's of lines even in C, though doing it in C is
> quite painful.  In Python (and similar languages, Python is just what
> I'm most used to these days) it's pretty easy.  You can also easily
> embed a web server in a Python script, so other people can use
> your program remotely without having to install anything, and you
> can upgrade the software at any time, etc.
>

Here, your argument is based on making use of existing libraries,
source or compiled. Forth does not have the same vast resource of
existing libraries as some of the popular languages. But, libraries do
exist, such as the FSL and FFL. And, as Bernd Paysan has demonstrated,
web server scripts in Forth are no problem to write. It's also
relatively easy to write custom client/server scripts. When I look at
languages such as Perl, I see that many of the libraries are little
more than bindings to existing C libraries. Forth does lack a uniform
model for accessing external pre-compiled libraries. However, several
Forth systems (the commercial systems, and gforth, bigForth, kForth,
to name a few free ones) do provide methods to import external library
code. The bindings do need to be written though, and because there is
no uniformity, the resulting library interface code is not portable.


> > The point was not so much that the application was not small, but that
> > the application required system resources which would not be available
> > in the type of systems to which you were referring earlier (those with
> > memory constraints on the order of 100K) as the useful targets for
> > Forth code.
>
> Well, what you're familiar with and comfortable with counts for a lot.
> I'd just expect much higher development and debugging time for a Forth
> or C level language than for a similar program in a Python or Lisp level
> language, if machine resources weren't too constrained.

I can't refute your expectation that there is a higher development
time in Forth than in Python or Lisp, for a similar program. Not
because I believe it, but because such a statement is highly dependent
on the application. Also, I can't claim to have tried to do the same
non-trivial program in several languages. And, of course, your
statement indicates the same is true for you. High-level popular
languages have the benefit that some types of problems can be coded in
less time, either due to the built-in features of the language, or due
to the existence of large library support. Nevertheless, Forth works
well for me in some of the problem domains in which I work. If you're
curious about why people use Forth (are we just old-timer holdouts,
ignorant of the rest of the world, or are we working with something
special?), the only real remedy is to try to apply Forth for your own
application needs. I don't expect such a question will have a clear
cut answer, but I don't believe it will be answerable by endless
discussions about language X vs language Y.

Cheers,
Krishna



Bernd Paysan

unread,
Jul 27, 2010, 5:59:53 AM7/27/10
to
John Passaniti wrote:

> On Jul 25, 2:01 pm, Bernd Paysan <bernd.pay...@gmx.de> wrote:
>> In the case of the example I gave, there was no such 20/20 hindsight.
>> The original code was completely crap, no single line survived.
>
> I addressed that. Unless your claim is that you didn't even look at
> the original code and had absolutely no knowledge of it's design, then
> you most certainly did have the benefit of starting well before the
> original developers did. It has nothing to do with how much code you
> may have reused.

Sorry, John, crap code is crap code, and you can't learn anything from that
- unless you are a novice and want to be taught by bad example (avoid
this, avoid that, it all leads to problems). I already knew to avoid the
mistakes they made, and I thought those preventive measures were common
practice.

There was some experience from debugging the previous design that went into
the new one: When I had to debug that crappy stuff, I learned to my horror
that they hadn't even written a testbench to exercise the code. I wrote
that testbench (took a few days). It also served as testbench for the new
code, and a number of design decisions on the testbench were also made for
the code.

I don't count that as "learning from the old code", since it was my
testbench (not inspired in any way by the old code), and writing a testbench
prior to writing the actual code is good practice anyway.

> Then you are in a unique situation where you apparently had more
> autonomy than most do. So I'm not sure how relevant your experience
> is to the rest of comp.lang.forth. Are you suggesting that others
> faced with presenting a repair/rework/rewrite set of choices to a boss
> should instead do whatever they think they think is best, and hope
> that it all works out in the end?

If you just mindlessly do what your boss tells you, you shouldn't be an
engineer. My boss back then wasn't a control freak, and as engineer, you
need to have judgment about what you do. It may not be easy to explain
without demonstration, and for a demonstration, you have to put significant
work into it.

Helmar

unread,
Jul 27, 2010, 11:47:04 AM7/27/10
to
John,

"But the fact it was built on a Forth doesn't mean that I've
eliminated 'semantic layers'."

My point definitely was not to "eliminate" semantic layers. I think
semantic layers are good. In Forth you can construct them with on-
board means (like being on the big boss's yacht - if something is
missing, he orders the helicopter, but Forth does not need one).

Domain-specific languages are something you can create uniquely well
in Forth - at least it is unique to have such a powerful concept behind
you. In some C-like languages (say Java) you use a compiler-compiler to
get something to parse configuration files. Wouldn't you try to express
those things in Forth first, at least if you are able to design the
application? I know this is something strange for industrial needs -
but it is definitely a failure of culture and not a failure of concepts.

Regards,
-Helmar
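To make the configuration-file example concrete, here is a minimal
sketch of such a DSL (the words baudrate, retries and host, and the
settings shown, are invented): the configuration file is plain Forth
source, so INCLUDED is the whole parser and no compiler-compiler is
needed.

VARIABLE %baud   VARIABLE %retries
CREATE %host 64 CHARS ALLOT   VARIABLE %host-len

: baudrate ( n -- )  %baud ! ;
: retries  ( n -- )  %retries ! ;
: host ( "name" -- )                \ no bounds check -- sketch only
   BL WORD COUNT  DUP %host-len !  %host SWAP CHARS MOVE ;

\ config.fs, loaded with  S" config.fs" INCLUDED , might read:
\
\    115200 baudrate
\    3 retries
\    host example.com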

Aleksej Saushev

unread,
Jul 27, 2010, 12:38:59 PM7/27/10
to
Helmar <hel...@gmail.com> writes:

> The domain specific languages are something you can uniquely create in
> Forth - at least it's unique to have such a powerful concept in back.
> In some C-like languages (say Java) you use a compiler-compiler to get
> something to parse configuration files. Would not you try to express
> the things in Forth first? At least if you are able to design the
> application? I know this is something strange for industrial needs -
> but definitively a fail of culture and not a fail of concepts.

Sorry? What do you call "C-like"? Java isn't C-like: you don't get a
segmentation fault because of a buffer overrun or type mismatch.
It isn't hard to express everything you want in XML, which is supported
by Java (unlike Forth).


--
HE CE3OH...

Helmar

unread,
Jul 27, 2010, 3:14:54 PM7/27/10
to
On 27 Jul., 18:38, Aleksej Saushev <a...@inbox.ru> wrote:
> Helmar <hel...@gmail.com> writes:
> > The domain specific languages are something you can uniquely create in
> > Forth - at least it's unique to have such a powerful concept in back.
> > In some C-like languages (say Java) you use a compiler-compiler to get
> > something to parse configuration files. Would not you try to express
> > the things in Forth first? At least if you are able to design the
> > application? I know this is something strange for industrial needs -
> > but definitively a fail of culture and not a fail of concepts.
>
> Sorry? What do you call "C-like"? Java isn't C-like: you don't get a
> segmentation fault because of a buffer overrun or type mismatch.

You think a language is similar because of the errors it can produce?
Sorry, you are not serious.

> It isn't hard to express everything you want in XML, which is supported by
> Java (unlike Forth).

Why do you have the wrong idea that in Forth you cannot parse XML? Is
this some basic misunderstanding on your side, or did you not find
something to copy/paste?

Regards,
-Helmar


> --
> HE CE3OH...

Aleksej Saushev

unread,
Jul 27, 2010, 3:34:19 PM7/27/10
to
Helmar <hel...@gmail.com> writes:

> On 27 Jul., 18:38, Aleksej Saushev <a...@inbox.ru> wrote:
>> Helmar <hel...@gmail.com> writes:
>> > The domain specific languages are something you can uniquely create in
>> > Forth - at least it's unique to have such a powerful concept in back.
>> > In some C-like languages (say Java) you use a compiler-compiler to get
>> > something to parse configuration files. Would not you try to express
>> > the things in Forth first? At least if you are able to design the
>> > application? I know this is something strange for industrial needs -
>> > but definitively a fail of culture and not a fail of concepts.
>>
>> Sorry? What do you call "C-like"? Java isn't C-like: you don't get a
>> segmentation fault because of a buffer overrun or type mismatch.
>
> You think a language is similar because of the errors it can produce?
> Sorry, you are not serious.

That's what I'm asking you: do you really think that a slight similarity
in syntax means that two languages are similar?

>> It isn't hard to express everything you want in XML, which is supported by
>> Java (unlike Forth).
>
> Why do you have the wrong idea that in Forth you cannot parse XML? Is
> this some basic misunderstanding on your side, or did you not find
> something to copy/paste?

Can you bring any XML parser in Forth _right_now_?
Or is it only a theoretical observation?


--
HE CE3OH...

Paul Rubin

unread,
Jul 27, 2010, 3:35:05 PM7/27/10
to
Helmar <hel...@gmail.com> writes:
>> Sorry? What do you call "C-like"? Java isn't C-like: you don't get a
>> segmentation fault because of a buffer overrun or type mismatch.
>
> You think a language is similar because of the errors it can produce?
> Sorry, you are not serious.

Of course there are other kinds of similarity but the classes of errors
that a language can eliminate are an important characteristic of the
language. As such, it is an area in which any two languages can be
similar or dissimilar. The possibility of sending a program into
completely undefined behavior (including treating user input as
executable code) due to a type error (including subscript errors) is one
of C's worst hazards, so the presence or absence of that same hazard is
a major point of comparison between other languages and C.

You earlier wrote:

> > The domain specific languages are something you can uniquely create in
> > Forth - at least it's unique to have such a powerful concept in back.

I agree with you that C and Java aren't very good for convenient DSL
creation, but Forth certainly isn't unique in making it easy. The
granddaddy of DSL host languages is probably Lisp.

Hugh Aguilar

unread,
Jul 27, 2010, 4:41:39 PM7/27/10
to

Bernd Paysan has said that Colorado is close to Texas, but Colorado is
actually closer to Missouri.

In Missouri the people say "Show me," and they don't believe anything
they are told until they've seen it with their own eyes. Apparently
Russia is also close to Missouri. It is certainly a lot closer than
California, where the people believe pretty much anything without
evidence.

Paul E. Bennett

unread,
Jul 27, 2010, 6:03:44 PM7/27/10
to
Hugh Aguilar wrote:

> On Jul 24, 3:19 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
> wrote:
>> John Passaniti wrote:
>> > On Jul 23, 7:34 pm, Hugh Aguilar <hughaguila...@yahoo.com> wrote:
>> >> > Why are you against documentation.
>>
>> >> Documentation after the software is written is a good idea, and the
>> >> sooner the better.
>>
>> I write no documentation after the software is written. It is much too
>> late by then as once the software is written and tested it is out the
>> door and the next project beckons.
>>
>> >> Documentation *before* the software is written does more harm than
>> >> good.
>
> When I was working as a Forth programmer, I was expected to write
> documentation for my software after I wrote it. I'm not talking about
> comments in the source-code, which are done during the writing of the
> program, but of a document describing the program.

That is what I write before coding the software (or building the hardware).
The documentation is a plan for what is to be done and how we prove the
final product works and performs as required.

[%X]

> Most of the time when programmers don't write such a document (or
> purposely writes a useless document), it is not because "the next
> project beckons" --- it is because they want job security by being the
> only person who knows how the program works --- they are hanging on to
> that project forever and avoiding any next project that may beckon.

There are valid reasons for keeping a maintenance programming team for
software that must change to track things like tax law changes. However,
having a maintenance team just because the software is still not right is
unacceptable, but it does go on.

[%X]

> What I said about documentation written *before* the software is
> written, doing more harm than good, was a reference to how some people
> are afraid of writing software and try to avoid doing so by writing
> documentation for vapor-ware instead.

I can understand this view. However, decent up-front documentation is
really the plan for what you will produce. Like any plan, it has to be
monitored and progressed properly to the point that a real product
emerges at the end.

> Programming is somewhat of a
> harsh world to live in, because the programs have to actually *work*.

No point in non-working programmes.

Those members of the workforce who are "just along for the ride" need to be
eradicated from a commercial company's workforce if the management are on
the ball enough to spot them.

Paul Rubin

unread,
Jul 27, 2010, 7:02:28 PM7/27/10
to
Krishna Myneni <krishna...@ccreweb.org> writes:
> We're just discussing the benefits of having the Forth environment. I
> just find it hard to believe that using a debugger can approach the
> level of flexibility of the Forth environment.

I won't say gdb approaches Forth's level of flexibility for defining new
commands while a program is running; I just say that it gives some basic
capabilities that are useful in the situation you initially described
(run the program, poke at a variable, call some function several times
using a gdb macro, etc). You do end up having to repeat an
edit-compile-debug cycle more times, though if the program is small (a
few thousand lines) the edit-compile part is pretty fast with today's
computers.

> With only slight additional effort, the command syntax can be tailored
> to my liking, e.g. having the command arguments be specified in infix
> notation.

I don't think Forth is ahead of Lisp or Haskell in that regard. Of
course C is terrible by comparison with any of them.

> An example is my simple notes database application,
> ftp://ccreweb.org/software/kforth/examples/notes.4th

Thanks for posting this; it's an interesting example to look at, though
I (so far) haven't figured out too much of what it's doing, and I wasn't
able to run it under gforth. I do have to say it looks a lot larger in
SLOC than the equivalent Python program would be.

> Here, your argument is based on making use of existing libraries,..


> Bernd Paysan has demonstrated, web server scripts in Forth are no
> problem to write.

For GUI's that is probably reasonable. Maybe it's ok for web servers
running on a local socket or LAN as a simple way to create a desktop GUI
for a trusted user. For web servers exposed to the public internet, I
have to protest since Forth (like C) has unchecked pointers, which can
lead to arbitrary code injection from malicious input. While no
language can eliminate all program bugs and you have to be careful about
security no matter what language you use, writing secure code in C is
notoriously difficult because of this. Forth seems to be in about the
same situation.

On the issue of libraries, it seems to me that one reason some languages
have lots of libraries is that it's easy to write the libraries in those
languages. So a lack of libraries in language X at least raises the
possibility that X isn't all that great for writing them with.

> I can't refute your expectation that there is a higher development
> time in Forth than in Python or Lisp, for a similar program. Not
> because I believe it, but because such a statement is highly dependent
> on the application.

Sure, there will always be specific applications where any particular
approach stands out, but as a generality it's pretty clear to me that
developing and debugging is (on average, not in every single instance)
much faster in Python than in C. Among the main reasons for this are:
1) strong types and pointers (checked at runtime); 2) garbage collection;
3) convenient support for OOP and/or higher-order functions; 4)
convenient syntax for complex nested data structures. Forth is maybe
ahead of C in area #3 above, but as far as I can tell, at best roughly
equal to C in the other areas.

I like the site www.rubyquiz.com which gives a bunch of programming
exercises intended for Ruby programmers. I don't use Ruby myself but
I've done some of the problems in Haskell as a way to learn Haskell.
I'd say I could do them pretty easily in Python as well, but doing them
in C would take a lot longer. I might attempt one or two of them in
Forth. Or this one, that was intended as a Java exercise (I did
it in Haskell instead):

http://twoguysarguing.wordpress.com/2009/12/09/rock-paper-scissors/

FWIW, I'm much slower with Haskell than with Python but that's probably
because Haskell is a lot different than languages I've used before, so
I'm still getting used to its idioms and methods. My Haskell programs
usually end up roughly the same size as the comparable Python programs.

> Also, I can't claim to have tried to do the same non-trivial program
> in several languages. And, of course, your statement indicates the
> same is true for you.

At least for a low standard of "non-trivial" I've done that informally
quite a few times, though not with Forth. Others have also done such
comparisons more formally:

http://www.cse.iitb.ac.in/~as/fpcourse/jfp.ps
http://norvig.com/java-lisp.html
http://page.mi.fu-berlin.de/prechelt/Biblio/jccpprt_computer2000.pdf
http://page.mi.fu-berlin.de/prechelt/Biblio/jccpp_cacm1999.pdf

The last two are the Prechelt papers referenced by the Norvig article.
The links in Norvig's article don't work. Actually a lot of Prechelt's
stuff looks interesting:

http://page.mi.fu-berlin.de/prechelt/Biblio/

> If you're curious about why people use Forth (are we just old-timer
> holdouts, ignorant of the rest of the world, or are we working with
> something special?),

I'm pretty sure it's a combination of both, and that Forth is similar to
Lisp in this regard. Paul Graham has a famous article I've linked here
before:

http://www.paulgraham.com/avg.html

I like the article a lot and agree with most of its smaller points,
though I think its main point is wrong. The article describes how his
company used Lisp to beat companies that were using C++ and Perl; then
he says there is a hierarchy of languages with increasing power, and
that Lisp is at the top of the pile, so (he doesn't say in so many
words) Lisp programmers are the kings of code. Of course the last part
is wrong: there may be such a hierarchy, but Lisp isn't at the top, and
there probably is no top, just an endless progression most of which
hasn't been invented yet. In set theory terms, Lisp is more like a
"limit ordinal" than a true maximum, and Forth is sort of the same way.

> the only real remedy is to try to apply Forth for your own application
> needs. I don't expect such a question will have a clear cut answer,

Well, I started hanging out here due to an interest in using Forth on an
8-bit micro, an area where I know it can do pretty well. Python simply
can't run on that type of platform, so that comparison is clear-cut ;-).

MarkWills

unread,
Jul 27, 2010, 7:07:32 PM7/27/10
to

Perfect post Hugh. I was going to post something similar, but you
saved me a lot of typing! ;-)

I know I sound like a miserable old cynic, but I have seen a lot of
what Hugh is describing in the Oil & Gas industry.

Regards

Mark

jacko

unread,
Jul 27, 2010, 7:19:09 PM7/27/10
to
On 24 July, 21:02, Elizabeth D Rather <erat...@forth.com> wrote:
> On 7/24/10 9:12 AM, Brad wrote:
> ...
>
> > OTOH, I can think of some reasons Forth will continue to be used in
> > some applications.
>
> > 1. Strategic business advantages matter. There will always be some
> > management willing to stick their neck out to improve the bottom line.
>
> > 2. Resource constrained systems will be around for a long time.
>
> Resource-constrained systems will be around *forever*, for the simple
> reason that however much processors improve in power and speed, and
> memory and other resources become cheaper and more plentiful, the
> ambitions of designers and developers grow at an even faster pace, as
> does the need to be earlier-to-market with lower unit costs.  A language
> and/or methodology that will reliably deliver faster development and
> more efficient use of resources will always have a place.
>
> Cheers,
> Elizabeth
>
> --
> ==================================================
> Elizabeth D. Rather   (US & Canada)   800-55-FORTH
> FORTH Inc.                         +1 310.999.6784
> 5959 West Century Blvd. Suite 700
> Los Angeles, CA 90045    http://www.forth.com
>
> "Forth-based products and Services for real-time
> applications since 1973."
> ==================================================

Not to mention feature size, weight, radiation cross-section issues, and
works-now reliability.

jacko

unread,
Jul 27, 2010, 8:12:31 PM7/27/10
to
> I really wish that when years ago the term "garbage collection" was
> invented...

"Linkage Control" ;-)

The tagging of 'recycler category' does help some, just as forgoing some
structure options can in its own way.

Hugh Aguilar

unread,
Jul 27, 2010, 9:41:11 PM7/27/10
to
On Jul 27, 5:02 pm, Paul Rubin <no.em...@nospam.invalid> wrote:
> On the issue of libraries, it seems to me that one reason some languages
> have lots of libraries is that it's easy to write the libraries in those
> languages.  So a lack of libraries in language X at least raises the
> possibility that X isn't all that great for writing them with.

I'm getting there with my novice package. :-)

Forth is actually easier for writing general purpose code because the
type-checking doesn't get in the way. I already described this in
regard to how the toucher function for EACH, FIND-NODE, and FIND-PRIOR
can take a variable number of arguments. You described how this could
be "faked up" in C using a void pointer to a struct, but that was just
awful --- the Forth way is much more elegant.

Similarly, I wrote a sort routine recently that works for any record.
This can be done in C using void pointers to structs (again!) for the
comparison function, but it is ugly --- once again, the Forth way is
much more elegant.
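
To make this concrete --- a toy sketch, *not* the actual code from my
novice package --- here is a word that takes a comparison xt. Because
Forth never checks what the xt does with the stack, the same word works
for plain numbers or for pointers to any kind of record (and the caller
can leave extra items underneath for the xt to use); only the xt needs
to know what the cells mean:

\ toy example, not from the novice package
: max-by ( x y xt -- x|y )   \ xt is a comparison: ( x y -- flag )
   >r 2dup r> execute IF drop ELSE nip THEN ;

\ usage with plain numbers:
3 7 ' > max-by .    \ prints 7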

> FWIW, I'm much slower with Haskell than with Python but that's probably
> because Haskell is a lot different than languages I've used before, so
> I'm still getting used to its idioms and methods.  My Haskell programs
> usually end up roughly the same size as the comparable Python programs.

> ...


> The article describes how his
> company used Lisp to beat companies that were using C++ and Perl; then
> he says there is a hierarchy of languages with increasing power, and
> that Lisp is at the top of the pile, so (he doesn't say in so many
> words) Lisp programmers are the kings of code.  Of course the last part
> is wrong: there may be such a hierarchy, but Lisp isn't at the top, and
> there probably is no top, just an endless progression most of which
> hasn't been invented yet.  In set theory terms, Lisp is more like a
> "limit ordinal" than a true maximum, and Forth is sort of the same way.

If I were to learn only one functional language, the top of the
hierarchy, which would you recommend:
[___] Haskell
[___] Scala
[___] Erlang
[___] Lisp/Scheme

This would be based upon what you know about my coding style having
seen my novice package.

> Well, I started hanging out here due to an interest in using Forth on an
> 8-bit micro, an area where I know it can do pretty well.  Python simply
> can't run on that type of platform, so that comparison is clear-cut ;-).

What 8-bit micro is that? What is your application? Inquiring minds
want to know. :-)

Hugh Aguilar

unread,
Jul 27, 2010, 9:57:22 PM7/27/10
to
On Jul 27, 4:03 pm, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>

wrote:
> Hugh Aguilar wrote:
> > On Jul 24, 3:19 am, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
> > wrote:
> >> I write no documentation after the software is written. It is much too
> >> late by then as once the software is written and tested it is out the
> >> door and the next project beckons.
>
> >> >> Documentation *before* the software is written does more harm than
> >> >> good.
>
> > When I was working as a Forth programmer, I was expected to write
> > documentation for my software after I wrote it. I'm not talking about
> > comments in the source-code, which are done during the writing of the
> > program, but of a document describing the program.
>
> That is what I write before coding the software (or building the hardware).
> The documentation is a plan for what is to be done and how we prove the
> final product works and performs as required.

My experience in regard to writing MFX, is that THERE IS NO PLAN ---
nobody knew what the MiniForth development system would entail before
it was written. Nobody even knew what the MiniForth would be like
before it was designed. The instruction set was changing on a *daily*
basis when I was writing the assembler and simulator. I had to make
the whole thing table-driven so that I could change the definition of
the processor quickly, which would not have been possible if it were
hard-coded.
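
Just to illustrate what "table-driven" means here --- a toy sketch,
nothing like the real MFX tables --- each instruction is a CREATEd data
record, so redefining the processor means editing table entries rather
than rewriting the assembler:

\ toy sketch only --- hypothetical encodings and names
variable pc   0 pc !
create image 256 cells allot
: inst, ( n -- )  image pc @ cells + !  1 pc +! ;
: opcode ( bits "name" -- )  create ,  does> ( operand -- ) @ or inst, ;

3 5 lshift opcode add,    \ change these table entries,
4 5 lshift opcode sub,    \ not the assembler itself

7 add,    \ assembles "add 7" into the image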

Another point is that the boss was an electrical engineer, and very
knowledgeable about the hardware aspects of designing a processor, but
he wasn't much of a programmer. He was offering a lot of advice about
how to write the program (a gigantic CASE statement), and none of that
would have worked. I just ignored him and did what I do, which is
write software. I'm really glad that we weren't doing any
documentation before the fact such as you are describing, or we would
have become locked-in to a lot of bad ideas. That is why I said
earlier that before-the-fact documentation does more harm than good. A
lot of the time, writing a program is like wrestling an alligator --- it
is not something that you can plan out step-by-step ahead of time.

> There are valid reasons for keeping a maintenance programming team for
> software that must change to track things like tax law changes. However,
> having a maintenance team just because the software is still not right is
> unacceptable, but it does go on.

I have never seen a program of significant size that didn't need to be
upgraded continually and indefinitely. It isn't that the software is
"still not right," it is just that it evolves to do more and more over
time.

> > Programming is somewhat of a
> > harsh world to live in, because the programs have to actually *work*.
>
> No point in non-working programmes.

This is what I like about programming --- that it has an inherent anti-
bullshit mechanism that doesn't exist in other fields of endeavour.
(excuse my language above, but that was the only way to put it)

Paul Rubin

unread,
Jul 27, 2010, 10:36:47 PM7/27/10
to
Hugh Aguilar <hughag...@yahoo.com> writes:
>> On the issue of libraries,
> I'm getting there with my novice package. :-)

No offense intended but I think you have a very long way to go before
approaching the level of stuff available for Python, Java, or whatever.

> Forth is actually easier for writing general purpose code because the
> type-checking doesn't get in the way.

Type checking exists for good reasons. It's not worth discussing unless
you've at least read the following:

http://web.archive.org/web/20080822101209/http://www.pphsg.org/cdsmith/types.html

> If I were to learn only one functional language,

"Only one" is not IMO a wise goal since they all have different
strengths. From your list, Haskell or Scala if you want to experience
the joys of precise static type systems. You're not allowed to
criticize them until you've done this ;-) If you want typeless, then
Scheme or maybe Erlang. Ruby or Python if you want something pragmatic
and easy to learn, though not as "functional". IMO, every programmer
should have some exposure to Lisp because of the ideas it contains, but
it is pretty old-fashioned these days as a medium for actual software.
ML should also be on your list, in the static-type camp like Haskell,
but less "pure" and maybe more practical.

I like Tim Sweeney's presentation that I've mentioned before, about
the problems with most present languages, and what future ones should do:

http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

> What 8-bit micro is that? What is your application? Inquiring minds
> want to know. :-)

Just fooling around with AVR stuff because of the Arduino ecosystem and
knowing people using them. I also have some interest in the Green Array
Forth chips if and when they become available.

Bernd Paysan

unread,
Jul 28, 2010, 5:37:24 AM7/28/10
to
Krishna Myneni wrote:

> However, several
> Forth systems (the commercial systems, and gforth, bigForth, kForth,
> to name a few free ones) do provide methods to import external library
> code. The bindings do need to be written though, and because there is
> no uniformity, the resulting library interface code is not portable.

We are in a better situation than you describe. The library interface code
itself is not portable, but the code *using* the interface code is. And
that's usually the majority of the work.

Paul E. Bennett

unread,
Jul 28, 2010, 6:28:31 AM7/28/10
to
Hugh Aguilar wrote:

> On Jul 27, 4:03 pm, "Paul E. Bennett" <Paul_E.Benn...@topmail.co.uk>
> wrote:

[%X]

>> That is what I write before coding the software (or building the
>> hardware). The documentation is a plan for what is to be done and how we
>> prove the final product works and performs as required.
>
> My experience in regard to writing MFX, is that THERE IS NO PLAN ---
> nobody knew what the MiniForth development system would entail before
> it was written. Nobody even knew what the MiniForth would be like

The documentation writing is a very hierarchical process. It starts with the
specification parts then works down through the design portion (which may
utilise prototyping to get a handle on the best strategy). At some point the
design provides some clear idea of what words are required in the
application. The glossary texts for these are written and the design
reviewed to ensure compliance with the requirements (cross-referenced from
the specification). On passing the review code is written, inspected, and
tested. The design gives quite a lot of detail of the overall structure and
what sort of functions are required. Remember that, through the analysis, I
will
already have some notion of the programming surfaces I will be dealing with
(clear-interface lexicons) and I can begin coding upwards from those
surfaces.

Yes, it takes a bit more time doing such documentation but that time is
saved in the reduction of test and debug effort you need to put in later
(when finding the solution to knotty problems can be very expensive).



> Another point is that the boss was an electrical engineer, and very
> knowledgeable about the hardware aspects of designing a processor, but
> he wasn't much of a programmer. He was offering a lot of advice about
> how to write the program (a gigantic CASE statement), and none of that
> would have worked. I just ignored him and did what I do, which is
> write software. I'm really glad that we weren't doing any
> documentation before the fact such as you are describing, or we would
> have become locked-in to a lot of bad ideas. That is why I said
> earlier that before-the-fact documentation does more harm than good. A
> lot of time, writing a program is like wrestling an alligator --- it
> is not something that you can plan out step-by-step ahead of time.

I think we will just have to agree that we disagree on that point. There are
methods and approaches to documentation that assist in breaking the problems
and complexity down to easily manageable functions. Otherwise you are
shipping a prototype.



>> There are valid reasons for keeping a maintenance programming team for
>> software that must change to track things like tax law changes. However,
>> having a maintenance team just because the software is still not right is
>> unacceptable, but it does go on.
>
> I have never seen a program of significant size that didn't need to be
> upgraded continually and indefinitely. It isn't that the software is
> "still not right," it is just that it evolves to do more and more over
> time.

Just a point of view of what constitutes a product. I do products that have
to stand the test of time. Most of my systems are required to operate for a
minimum of 25 years with little maintenance. They also have to continue to
provide safe operation over that time. When it comes time for a new product
to replace it, it will be a new design from the ground up because the
technology will have moved on so much. My product fields are mostly in
energy systems (gas/oil/petrochemical/coal/nuclear) and transportation
systems (road and rail). I have also done banking, medical and food
packaging industries too.



>> > Programming is somewhat of a
>> > harsh world to live in, because the programs have to actually *work*.
>>
>> No point in non-working programmes.
>
> This is what I like about programming --- that it has an inherent anti-
> bullshit mechanism that doesn't exist in other fields of endevour.
> (excuse my language above, but that was the only way to put it)

Excused but I think that needs more clarification of why you think that.

Krishna Myneni

unread,
Jul 28, 2010, 8:09:24 AM7/28/10
to
On Jul 28, 4:37 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Krishna Myneni wrote:
> > However, several
> > Forth systems (the commercial systems, and gforth, bigForth, kForth,
> > to name a few free ones) do provide methods to import external library
> > code. The bindings do need to be written though, and because there is
> > no uniformity, the resulting library interface code is not portable.
>
> We are in a better situation than you describe.  The library interface code
> itself is not portable, but the code *using* the interface code is.  And
> that's usually the majority of the work.
>

As long as we use the same word names, the same order of arguments,
and the same returns on the stack for the interface code, then end
applications using the library interface should be portable. However,
there is no guarantee of uniformity in the interface code for
different systems. For example, I used bigForth's X11 interface code
as a go-by when making the equivalent interface for kForth. However,
for reasons I no longer recall, I made the argument order different
for some of the X11 interface words -- I think the reason was to keep
the arg list consistent with the C functions. X11 is not a typical
case, however, since it's such a huge library and there are many
associated data structures and constants.
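
For example (stack comments only, and just to show the difference ---
the actual bindings in either system may differ in detail), take
XDrawLine, whose C prototype takes (display, drawable, gc, x1, y1, x2, y2):

\ binding declared in C argument order:
\   XDrawLine ( display drawable gc x1 y1 x2 y2 -- )
\ binding declared in reversed (push-last-C-argument-first) order:
\   XDrawLine ( y2 x2 y1 x1 gc drawable display -- )

Application code written against one ordering obviously won't run
unchanged against the other.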


Krishna

Bernd Paysan

unread,
Jul 28, 2010, 8:53:17 AM7/28/10
to
Krishna Myneni wrote:
> As long as we use the same word names, the same order of arguments,
> and the same returns on the stack for the interface code, then end
> applications using the library interface should be portable. However,
> there is no guarantee of uniformity in the interface code for
> different systems. For example, I used bigForth's X11 interface code
> as a go by when making the equivalent interface for kForth. However,
> for reasons I no longer recall, I made the argument order different
> for some of the X11 interface words -- I think the reason was to keep
> the arg list consistent with the C functions.

In the meantime, this has changed in bigForth, as well. The Xlib interface
was one of the first C interfaces in bigForth, and back then, I had reverse-
argument order bindings only. Later bindings, like OpenGL, were already in
the C order from start, and when I ported MINOS to VFX, I took the
opportunity to change the argument order in the Xlib bindings - while
Stephen took the opportunity to provide floating point arguments on the
Forth FP stack. The result is that the user sides of both interface systems
are compatible now.

Helmar

unread,
Jul 28, 2010, 10:05:43 AM7/28/10
to
On 27 Jul., 21:35, Paul Rubin <no.em...@nospam.invalid> wrote:
> Helmar <hel...@gmail.com> writes:
> >> Sorry? What do you call "C-like"? Java isn't C-like, you don't get
> >> segmentation fault because of buffer overrun or type mismatch.
>
> > You think a language is similar because of the errors it can produce?
> > Sorry, you are not serious.
>
> Of course there are other kinds of similarity but the classes of errors
> that a language can eliminate is an important characteristic of the
> language.

You are missing the point here. Indeed the *possibility* of eliminating
some errors is something that distinguishes languages. But this is a weak
criterion, since it also depends on research into how to do so. The
implementation of the language has to ensure the current research is
really implemented. Take a look at C compilers and the different levels
of support for the developer that evolved over the years.

If you push the idea of "languages against errors" further you come to
strategies of "lazy interpretation" - which means the implementation
tries to interpret what could have been meant by a specific
expression. The only thing I know of that had such ideas and
implemented them is Perl - and well, Perl <=5 has no other
implementations than the one that exists. So it is better to say that Perl
up to version 5 was a tool and not a language.

>  As such, it is an area in which any two languages can be
> similar or dissimilar.  The possibility of sending a program into
> completely undefined behavior (including treating user input as
> executable code) due to a type error (including subscript errors) is one
> of C's worst hazards, so the presence or absence of that same hazard is
> a major point of comparison between other languages and C.

Forth can do the same as C. So Forth is as powerful as C. But hell - we
have to take a look at the implementation...

> You earlier wrote:
> > > The domain specific languages are something you can uniquely create in
> > > Forth - at least it's unique to have such a powerful concept in back.
>
> I agree with you that C and Java aren't very good for convenient DSL
> creation, but Forth certainly isn't unique in making it easy.  The
> granddaddy of DSL host languages is probably Lisp.

I'm not a historian of computer languages. AFAIK Lisp is not really
able to break its basic structures. Forth can - simply hack into
the text interpreter.
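
A trivial illustration (a toy of mine, not anybody's library): a
parsing word takes over reading the input stream, so the text that
follows it never reaches the normal interpreter loop at all:

\ toy example
: tag ( "name" -- )  bl word count  ." <" type ." >" ;
tag body    \ prints <body> --- "body" was never looked up as a word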

Regards,
-Helmar

Albert van der Horst

unread,
Jul 28, 2010, 10:36:47 AM7/28/10
to
In article <dab8852a-b9d8-48d1...@w30g2000vbs.googlegroups.com>,

Case in point: arguments to interactive commands in a shell.
Arguments for commands are sometimes like small languages.
Forth is nice for that.

: doit 1 ARG[] DROP C@ &- = ( option) IF 1 ARG[] EVALUATE THEN
.... ;

and then we have the options:
: -d SHIFT-ARGS cidis ;
: -a SHIFT-ARGS cias ;
: -l 2 ARG[] EVALUATE length ! SHIFT-ARGS SHIFT-ARGS ;
...

>
>Regards,
>-Helmar


--
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Helmar

unread,
Jul 28, 2010, 10:34:39 AM7/28/10
to

Get friends ;)

> Or is it only a theoretical observation?

No, I did it years ago. I think I open-sourced it even. Well, I'm
unsure if I open-sourced it, because the project used closed-source
databases and without them the programs were not really useful. But
well, the libraries exist. And they are still working and they
are in use. Converting XML to internal representations. Nothing
special about that. And why should Forth not be able to do so, in your
opinion?

-Helmar

Stephen Pelc

unread,
Jul 28, 2010, 10:47:51 AM7/28/10
to
On Tue, 27 Jul 2010 23:34:19 +0400, Aleksej Saushev <as...@inbox.ru>
wrote:

>Can you bring any XML parser in Forth _right_now_?
>Or is it only a theoretical observation?

An XML framework has been supported for several years in VFX Forth.
It's derived from work by Jenny Brien and is used with MPE's SOAP
server.

Stephen


--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

jacko

unread,
Jul 28, 2010, 11:01:13 AM7/28/10
to
> Type checking exists for good reasons.  It's not worth discussing unless
> you've at least read the following:
>
> http://web.archive.org/web/20080822101209/http://www.pphsg.org/cdsmit...

Composite typing.

Assign each list-derived type a prime 'Galois' (faster division?)
number. Any reference in a list type to another list type multiplies
the primary list type by the referenced type, implying a composite
list type. All factor-square-free types are not self-referring. What's
the fastest Möbius or Galois-Möbius function calculation speed, for
making a better collector?

Is the maximum operator number say for argument order irrelevant
language. => tag casting?

Any use?

Doug Hoffman

unread,
Jul 28, 2010, 12:42:50 PM7/28/10
to
On 7/27/10 7:02 PM, Paul Rubin wrote:

> I like the site www.rubyquiz.com which gives a bunch of programming
> exercises intended for Ruby programmers. I don't use Ruby myself but
> I've done some of the problems in Haskell as a way to learn Haskell.
> I'd say I could do them pretty easily in Python as well, but doing them
> in C would take a lot longer. I might attempt one or two of them in
> Forth. Or this one, that was intended as a Java exercise (I did
> it in Haskell instead):
>
> http://twoguysarguing.wordpress.com/2009/12/09/rock-paper-scissors/


Thanks for the "crossword puzzle" of the day.
Here's one way to Forth it:

\ maximum namelength is 32 chars
create name1 33 allot
create name2 33 allot

\ we do not filter for invalid input,
\ but do accept upper or lower case
: getPlay ( -- char )
cr ." [R]ock, [P]aper, or [S]cissors? " key dup emit
32 or ; \ convert char to lowercase

: getName ( addr -- ) \ addr will be name1 or name2
dup 1+ 32 accept swap c! ;

: namePrompt ( n -- )
cr ." Player " . ." Name: " ;

: getNames
1 namePrompt name1 getName
2 namePrompt name2 getName ;

: compareInput { p1 p2 -- f | n t } \ true if we have a 1round winner
\ n=0 => player1 wins
\ n=1 => player2 wins
p1 p2 = IF cr false exit THEN
p1 CASE
[char] r OF p2 [char] p = ENDOF
[char] p OF p2 [char] s = ENDOF
[char] s OF p2 [char] r = ENDOF
ENDCASE true ;

: printWinner ( 0 or 1 -- )
cr IF name2 ELSE name1 THEN
count type ." Wins!" ;

: 1round ( -- roundWinner ) \ 0=player1, 1=player2
BEGIN
getPlay getPlay compareInput
UNTIL ;

: play1 \ play just one round
getNames
1round
printWinner ;


\ Add code for "First to X"

0 value score1
0 value score2
0 value scoreWin

: gameDone { n -- false | n true }
n IF 1 +to score2 score2 ELSE 1 +to score1 score1 THEN
scoreWin = IF cr ." First to " scoreWin . ." is " n true
ELSE cr false
THEN ;

: init ( n -- )
to scoreWin 0 to score1 0 to score2
getNames ;

: playto ( n -- ) \ play first to n wins
init
BEGIN
1round gameDone
UNTIL
printWinner ;

“Best of X” and “To X, win by Y” would not be hard to add.
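For instance, an (untested) sketch of the first one, reusing playto:

: bestof ( n -- )   \ best of n rounds = first to more than half
   2/ 1+ playto ;

5 bestof    \ best of 5 = first to 3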

-Doug

Doug Hoffman

unread,
Jul 28, 2010, 12:47:34 PM7/28/10
to
Sorry, here is a version that does not use +to or the new locals and so
should be ANS compatible:

-Doug

\ maximum namelength is 32 chars
create name1 33 allot
create name2 33 allot

\ we do not filter for invalid input,
\ but do accept upper or lower case
: getPlay ( -- char )
cr ." [R]ock, [P]aper, or [S]cissors? " key dup emit
32 or ; \ convert char to lowercase

: getName ( addr -- ) \ addr will be name1 or name2
dup 1+ 32 accept swap c! ;

: namePrompt ( n -- )
cr ." Player " . ." Name: " ;

: getNames
1 namePrompt name1 getName
2 namePrompt name2 getName ;

: compareInput \ { p1 p2 -- f | n t } \ true if we have a 1round winner
locals| p2 p1 | \ n=0 => player1 wins n=1 => player2 wins
p1 p2 = IF cr ." Tie Round" cr false exit THEN
p1 CASE
[char] r OF p2 [char] p = ENDOF
[char] p OF p2 [char] s = ENDOF
[char] s OF p2 [char] r = ENDOF
ENDCASE true ;

: printWinner ( 0 or 1 -- )
cr IF name2 ELSE name1 THEN
count type ." Wins!" ;

: 1round ( -- roundWinner ) \ 0=player1, 1=player2
BEGIN
getPlay getPlay compareInput
UNTIL ;

: play1 \ play just one round
getNames
1round
printWinner ;

\ Add code for "First to X"

variable score1
variable score2
variable scoreWin

: gameDone \ { n -- false | n true }
locals| n |
n IF 1 score2 +! score2 @ ELSE 1 score1 +! score1 @ THEN
scoreWin @ = IF cr ." First to " scoreWin @ . ." is " n true
ELSE cr false
THEN ;

: init ( n -- )
scoreWin ! 0 score1 ! 0 score2 ! getNames ;

Hugh Aguilar

unread,
Jul 28, 2010, 6:53:40 PM7/28/10
to
On Jul 28, 8:34 am, Helmar <hel...@gmail.com> wrote:
> But
> well, the libraries exist. And they are still working and they
> are in use. Converting XML to internal representations. Nothing
> special about that. And why should Forth not be able to do so, in your
> opinion?

I don't think he was saying that Forth can't work with XML, just that
there aren't any public-domain libraries available for doing it. Forth
can do anything theoretically. As I was pointing out earlier though,
there is a gap between theory and implementation, and a lot of people
never jump that gap.

This is the same point that Paul Rubin is making below:

On Jul 27, 8:36 pm, Paul Rubin <no.em...@nospam.invalid> wrote:


> Hugh Aguilar <hughaguila...@yahoo.com> writes:
> >> On the issue of libraries,
> > I'm getting there with my novice package. :-)
>
> No offense intended but I think you have a very long way to go before
> approaching the level of stuff available for Python, Java, or whatever.

Well, a journey of a thousand miles begins with a single step.

I've said before that I don't use OPC (other people's code) and, for
the most part, I don't. Under certain circumstances though, I might be
persuaded to include OPC in my novice package. For example, I don't
know very much about XML, so I'm not a good candidate for writing that
myself.

> > If I were to learn only one functional language,
>
> "Only one" is not IMO a wise goal since they all have different
> strengths.  From your list, Haskell or Scala if you want to experience
> the joys of precise static type systems.  You're not allowed to
> criticize them until you've done this ;-)

But I still get to criticize C and C++, right? That's pretty much a
given for Forth programmers! :-D

Seriously, I appreciate all the links you have provided to interesting
articles on the web. I'm basically a hobbyist with no education beyond
high school. I've been criticized for "wildly spewing code," but that
is essentially what I do, and I like it. I do have vague aspirations
of someday learning something though.

I had trouble learning Factor because I didn't understand dynamic OOP,
and so I resolved to learn Lisp because that is what Factor is derived
from, and there are beaucoup books available on Lisp that I could
learn from. I am also interested in Erlang because I find the parallel
processing idea to be pretty amazing. I would like to someday
implement something like that for micro-controllers. There is also LFE
(Lisp Flavored Erlang) that allows writing Lisp for the Erlang VM, so
anything that I learn about Lisp could be carried over to Erlang.

Haskell might be too high-brow for me. Scala looks interesting, but
I'm hesitant to delve into it because it is dependent upon the Java
ecosystem, which is pretty big. There is an entire section in the
bookstore describing Java packages. I can't imagine ever learning all
of that.

> > What 8-bit micro is that? What is your application? Inquiring minds
> > want to know. :-)
>
> Just fooling around with AVR stuff because of the Arduino ecosystem and
> knowing people using them.  I also have some interest in the Green Array
> Forth chips if and when they become available.

The AVR8 isn't a very good target for Forth because it doesn't support
writing to program memory. You can't have an on-board Forth system.
You have to develop your application using a cross-compiler. This
kills the whole interactive-development advantage of Forth; you might
as well be using GCC.

Even though I wrote a cross-compiler, I don't really recommend using a
cross-compiler for application development (although that is what we
did at Testra). I recommend using the cross-compiler to develop an on-
board Forth system, and then using that for application development.
You may still need to do assembly programming with the cross-compiler,
but most of your development will be done interactively.

I would really recommend using one of the newer chips that support
writing to program memory, and also have plenty of 16-bit registers.
The AVR8 only has three (X, Y and Z), which isn't much. If you really
want to use the AVR8 because of the Arduino ecosystem, you could write
an on-board Forth system that uses a threaded scheme of some kind, so
the Forth code can be in data memory. It might be possible to develop
your Forth code this way, and then rebuild the entire program using
the cross-compiler in order to compile your Forth code into machine-
language for speed.

Are you writing your own Forth system, or using OPC?

Paul Rubin

unread,
Jul 28, 2010, 11:34:55 PM7/28/10
to
Hugh Aguilar <hughag...@yahoo.com> writes:
> I've said before that I don't use OPC (other people's code)

It's no longer really possible to be an effective programmer and operate
that way, unfortunately. MIT ditched its intro CS course that was based
on Scheme and taught fundamental algorithms etc., replacing it with one based on
Python where you get to program a robot using documentation they give
you that is incomplete and has errors on purpose, and complex libraries
that are full of bugs. That is supposed to more closely model the real
world.

> I still get to criticize C and C++, right? That's pretty much a
> given for Forth programmers! :-D

Yes of course. It is every programmer's duty to criticize C and C++
every chance they get ;-).

> Seriously, I appreciate all the links you have provided to interesting
> articles on the web. I'm basically a hobbyist with no education
> beyond high school.

I finished college a while back, but most of what I know about modern
programming languages is from web pages, Wikipedia, and the online
Haskell community. There is a stupendous amount of material out there.

> I am also interested in Erlang because I find the parallel
> processing idea to be pretty amazing. I would like to someday
> implement something like that for micro-controllers.

I'm not sure but I think Hedgehog Lisp might have sort of an Erlang
flavor for micros (32-bit though). It implements concurrency with state
machines that could possibly be seen as a low rent version of Erlang
processes. I'd just install Erlang and mess with it though. The Erlang
IRC channel is pretty friendly if you have questions. I've looked at
the book about Erlang by Joe Armstrong (Erlang's designer), but have not yet
written any Erlang code. It's on my list.

Your points about Haskell and Scala are valid. Haskell is mostly a
research testbed and getting to understand it (IMO) requires absorbing
some mathematical logic culture, which is interesting in its own right
but it's one of the reasons learning Haskell has been a slow process for
me. ML might be more accessible:

http://www.cs.cmu.edu/~rwh/introsml/
http://www.ocaml-tutorial.org/
http://min-caml.sourceforge.net/index-e.html

I chose to pursue Haskell rather than ML or Erlang as my first serious
FPL on the theory that if I was going to burn personal time and energy
on pointy-headed languages, I might as well start with the one closest
to the bleeding edge, after which the others would be easy. But it's
certainly required a lot of head scratching.

> The AVR8 isn't a very good target for Forth because it doesn't support

> writing to program memory... This kills the whole
> interactive-development advantage of Forth

That's an interesting point. I remember Elizabeth saying that the
Swift system can reload some flash processors on the fly, if that
helps. But I think I'd just write the code on my laptop with
some simulated target hardware, so only the lowest level stuff
would have to be tested much on the target itself.

> Are you writing your own Forth system, or using OPC?

I've been fooling with gforth but I guess DIY is more in the proper
spirit. It might be interesting to implement Forth in Haskell.

Paul Rubin

unread,
Jul 28, 2010, 11:36:25 PM7/28/10
to
Doug Hoffman <glid...@gmail.com> writes:
> Sorry, here is a version that does not use +to or the new locals and
> so should be ANS compatible: ...

Thanks. That runs nicely under gforth. I may try modifying it to add
the missing features.

Krishna Myneni

unread,
Jul 28, 2010, 11:37:59 PM7/28/10
to
On Jul 27, 6:02 pm, Paul Rubin <no.em...@nospam.invalid> wrote:
> Krishna Myneni <krishna.myn...@ccreweb.org> writes:

> >... An example is my simple notes database application,


> >ftp://ccreweb.org/software/kforth/examples/notes.4th
>
> Thanks for posting this; it's an interesting example to look at, though
> I (so far) haven't figured out too much of what it's doing, and I wasn't
> able to run it under gforth.  I do have to say it looks a lot larger in
> SLOC than the equivalent Python program would be.


Hi. Thanks for trying it under Gforth. I haven't tested the code under
another ANS Forth for quite a while, and I found that it needed
updating. I have revised the code, and I will discuss the revision in
another post, since it relates to the recently proposed memory access
word set. For now, you may find a standard version which works with
Gforth at

ftp://ccreweb.org/software/gforth/

You will need the following files:

notes.fs
strings.fs
user.fs
struct-ext.fs


> I'm pretty sure it's a combination of both, and that Forth is similar to
> Lisp in this regard.  Paul Graham has a famous article I've linked here
> before:
>
>    http://www.paulgraham.com/avg.html
>

I read this article many years ago, and have been interested in Lisp
for some time now.


Cheers,
Krishna
