
[Benchmarks and C/Unix]


Stephen J. Williams

unread,
Apr 22, 1986, 11:37:10 PM4/22/86
to
In article <21...@peora.UUCP> j...@peora.UUCP (J. Eric Roskos) writes:
>> Here I will agree with you. In particular, languages that make the
>> machine appear to have a uniform address space (C is a good example)
>> require kludges for performance on machines without that (80286).
>
>This brings up an interesting topic that I was just thinking about last
>night. Viz., how might one provide better architectural support for C?

Why would one want to? Wasn't C sort of written for PDP-11s?

--Scal

Rex Ballard

unread,
May 5, 1986, 7:47:53 PM5/5/86
to
Starting backwards. Yes, C was written for PDP-11s, as a sort of GENERIC
ASSEMBLER. When reduced to its simplest constructs, that is exactly
what C is. However, by allowing constructs to be combined in sometimes
perverse ways, it looks like any other Algol-type language. Of course,
to broaden the appeal of C/Unix, additional "features" and variations
were added to accommodate machines with oddities like 9-bit chars and 24-bit ints.

Why would someone want to provide architectural support? Because it is
a "generic" language. FORTH and, to a lesser extent, Pascal are also
"generic" in that "inner interpreters" can be used to run intermediate
binaries, but C has traditionally been compiled directly to native form
(although there may be intermediate stages).

If you only want to support ONE processor, you could build a super LISP,
Prolog, FORTH, Pascal, or Modula-2 engine. But if you want to support
a LINE of machines, it makes sense to support a high-level language
that can run on several machines. It also makes sense to choose a
language where OEMs, VARs, and end users won't have to re-invent the
wheel to get simple functionality as they migrate up the line.

How could one provide better support for C? CCI's 6/32 does just this,
and so do a few other machines. There are lots of things you can do, like
building fast "frame" call architectures, pointer arithmetic, and
stacks, pushing only modified registers and restoring them, or using
FIFOs or separate CACHES for pointer registers and stacks. Even using
separate stacks for parameters and return/register values. Most of
these would improve any general-purpose machine, even if some other
language replaced C.

Could C, or the typical practices used in C, be improved? Absolutely!
If you have a machine where the overhead of a C subroutine is sufficiently
low, you could make more effective use of complete, structured designs,
reducing the number of conditionals in a module to two or three, rather
than stringing several "units" together into one large subroutine.
This creates a need for better naming conventions, better organization
of subroutines (such as those used in SmallTalk), even databases to
manage code, documentation, and source. The "code, link, test, debug"
cycle could be sped up with an interactive, incremental
editor/compiler/debugger. Such things are available, especially on
PCs, but still slightly "under-developed" for general use.
Perhaps either SmallTalk could be enhanced to give more direct control
over the system, or C could be enhanced to include some of the features
of SmallTalk. (All of this is happening, but the direction needs to
be pointed out.) Currently, C lacks a productive "environment" for
the programmer that can be run on all C-capable machines. I know
of none that will let me unit-test a single module in an 8-meg
project without some sort of "cut and paste".

As to the superiority of UNIX over other operating systems, there are
few who think UNIX couldn't be improved. The big question is how?
I'm sure we will be seeing a rapid evolution of UNIX and UNIX-like
operating systems as multi-tasking micros and multi-processing minis
become mandatory state of the art, rather than expensive luxuries.
Hopefully, a few of them won't be "designed by committee". Unix
started out right (a small group trying good ideas), but evolved
into a slow memory pig.

It would be nice to see Unix modularized, so that the whole kernel
doesn't have to be re-linked just to add two lines of code to a
driver. It would be nice if the queuing and signalling, as well as
the context switches, could be cleaned up. It would be nice to have
generic "transaction" mechanisms similar to pipes, so that work could
be shared between processes. It would be nice to have locks, so that
processes that wish to receive input from several processors could do
so in real time. It would be nice if "system libraries" were "sharable"
so that fewer copies of "printf" were taking up swap space. It would
be nice if all but the bare-bones drivers could be "tasks" rather than
part of the kernel, so that only what was needed at the moment
would sit in core. The list goes on, but most of it has been hashed to death
already.

If we wish to come up with better products, we have to look at both the
best and the worst of systems: languages, operating systems, and
architectures. I haven't seen a system yet
that is so good that it can't be improved, or a system so bad that
there weren't at least one or two good features in it.

Some of the most valuable treasures can be found in some of the most
obscure places. Things like the 1802/CHIP-8 and other "long forgotten"
systems, some of which are still used, contain a veritable diamond mine
of ideas, many of which fell into obscurity simply because their creators
didn't have the "clout" of their competitors.

Dan Ts'o

unread,
May 6, 1986, 6:54:11 PM5/6/86
to
> Some years ago when I was learning 6800 assembler (anybody remember
> D2 kits?) I used to first write everything in C and then hand compile it
> into M6800 asm. When I told my professor (are you reading this Professor
> Efe?) that I did this instead of drawing flow-charts, he laughed at me, but
> as long as you keep your code simple, the conversion is trivial and can be
> done in your head as fast as you can write down the asm code. Perhaps the
> simplicity of the M6800 (dare I call it a RISC machine? :-)) makes this
> easier than for something like a vax, but I can still do it; let's see:
>
> char i, j;
> j = 0;
> for (i = 0; i <= 10; i++)
> j = j + i;
>
> clr j
> loop1: clr i
> cmp i, #10
> bge out
> lda j
> adda i
> sta j
> lda i
> inca
> sta i
> bra loop1:
> out:

Not too good, I'm afraid. The loop1 label is misplaced - you "clr i"
on every iteration. Also #10 is octal 8 instead of 10. You probably don't want
the colon at the end of the "bra" statement. You don't want "bge out" but
"bgt out" (since you say i <= 10). And finally, i and j should be local
variables on the stack. I can see why your professor laughed.

Roy Smith

unread,
May 6, 1986, 6:54:11 PM5/6/86
to
In article <3...@ccird1.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
> [...] C was written for PDP-11s, as sort of a GENERIC ASSEMBLER. When
> reduced to its simplest constructs, that is exactly what C is.

Some years ago when I was learning 6800 assembler (anybody remember
D2 kits?) I used to first write everything in C and then hand compile it
into M6800 asm. When I told my professor (are you reading this Professor
Efe?) that I did this instead of drawing flow-charts, he laughed at me, but
as long as you keep your code simple, the conversion is trivial and can be
done in your head as fast as you can write down the asm code. Perhaps the
simplicity of the M6800 (dare I call it a RISC machine? :-)) makes this
easier than for something like a vax, but I can still do it; let's see:

char i, j;
j = 0;
for (i = 0; i <= 10; i++)
        j = j + i;

        clr j
loop1:  clr i
        cmp i, #10
        bge out
        lda j
        adda i
        sta j
        lda i
        inca
        sta i
        bra loop1:
out:

--
Roy Smith, {allegra,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

Bob Larson

unread,
May 7, 1986, 9:17:31 AM5/7/86
to
In article <3...@ccird1.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
>As to the superiority of UNIX over other operating systems, there are
>few who think UNIX couldn't be improved. The big question is how?
>I'm sure we will be seeing a rapid evolution of UNIX and UNIX-like
>operating systems as multi-tasking micros and multi-processing minis
>become mandatory state of the art, rather than expensive luxuries.
>Hopefully, a few of them won't be "designed by commitee", Unix
>started out right (a small group trying good ideas), but evolved
>into a slow memory pig.
Which leads to the UNIX definition in the os9/68k user's manual:
"An operating system similar to os-9, but with less functionality
and special features designed to soak up excess memory, disk space
and CPU time on large, expensive computers."

>It would be nice to see Unix modularized, so that the whole kernal
>doesn't have to be re-linked just to add two lines of code to a
>driver.

As in os9.

>It would be nice if the queuing and signalling as well as
>the context switches could be cleaned up.

>It would be nice to have
>generic "transaction" mechanisms similar to pipes, so that work could
>be shared between processes.

Are os9/68k's named pipes what you are looking for?

>It would be nice to have locks, so that
>processes that wish to recieve input from several processors could do
>so in real time.

>It would be nice if "system libraries" were "sharable"
>so that less copies of "printf" were taking up swap space.

As in os9/68k, but I prefer how it is done in Primos 19.4 and beyond.
(Primos uses some hardware support: fault bit on pointers.)

>It would
>be nice if all but the bare bones drivers could be "tasks" rather than
>part of the kernal, so that only that which was needed at the moment
>would sit in core.

I think os9/68k has what you are asking for here.

>The list goes on, but most has been hashed to death already.

And much of it is unique to unix. Not all "Unix-like" operating systems
are bug-for-bug compatible.

>If we wish to come up with better products, we have to look at both the
>best and the worst of systems: languages, operating systems, and
>architectures. I haven't seen a system yet
>that is so good that it can't be improved, or a system so bad that
>there weren't at least one or two good features in it.

I agree.

--
Bob Larson
Arpa: Bla...@Usc-Ecl.Arpa
Uucp: ihnp4!sdcrdcf!usc-oberon!blarson

Landon Dyer

unread,
May 8, 1986, 2:14:49 PM5/8/86
to
In article <4...@rna.UUCP>, d...@rna.UUCP (Dan Ts'o) writes:
> > Some years ago when I was learning 6800 assembler (anybody remember
> > D2 kits?) I used to first write everything in C and then hand compile it
> > into M6800 asm.

I used exactly the same technique writing video game cartridges for
the "old" Atari --- write a module in a kind of pseudo-C, then hand
compile it into 6502 assembly. The trick was that each function had
three local variables called A, X and Y; it really WAS high-level
assembly language. My roommate called me "the world's best optimizing
C compiler for the 6502." It worked well.

You can laugh, but I made the company millions of dollars this way.
(It's not MY fault the old Atari blew it --- the engineers were making
money but the marketing types out-numbered us three to one! Grrr.)
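That pseudo-C style can be sketched as ordinary C in which each function confines itself to three byte-wide locals named A, X and Y, mirroring the 6502's registers, so every statement hand-compiles to one or two instructions. The following is a hypothetical reconstruction of the idea; the function, its table, and the 6502 mnemonics in the comments are my own illustration, not Atari's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the "pseudo-C" style: the only locals are the
 * 8-bit A, X and Y, mirroring the 6502's registers, so each statement
 * maps onto one or two instructions (shown in the comments). */
static uint8_t sum_table(const uint8_t *table, uint8_t n)
{
    uint8_t A = 0;              /* lda #0            */
    uint8_t X = 0;              /* ldx #0            */
    while (X != n) {            /* cpx n / beq done  */
        A = A + table[X];       /* clc / adc table,x */
        X = X + 1;              /* inx               */
    }
    return A;                   /* rts               */
}
```

Because every line touches only A, X or Y, the translation step is purely mechanical.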

----------------

Here's a perverse thought: Has anyone done any research on
architectures to help people writing /assembly language/? (Maybe the
PDP-11, VAX or IBM-370 architectures are optimal, or maybe no one has
ever considered making life easier for those who spend their lives
coding "down unda.")
--

Landon Dyer "If Business is War, then
Atari Corp. I'm a Prisoner of Business!"
... {hoptoad,lll-crg!vecpyr}!atari!dyer "Quantity is Quality!"

Carl S. Gutekunst

unread,
May 8, 1986, 3:39:18 PM5/8/86
to
In article <23...@phri.UUCP> r...@phri.UUCP (Roy Smith) writes:
>... I used to first write everything in C and then hand compile it
>into M6800 asm....

>as long as you keep your code simple, the conversion is trivial and can be
>done in your head as fast as you can write down the asm code. Perhaps the
>simplicity of the M6800 makes this easier than for something like a vax....

Interestingly enough, this is exactly the way Roy Harrington wrote all of Z80
Cromix. The entire kernel was written first in C, then hand compiled into Z80
assembler. It's the only assembler program of that magnitude (40K of code)
that I've ever worked with that was easy and pleasant to maintain. (Roy's C
coding was very clean as well, which helped.)

The penalty was that many of the C constructs needed for a Unix-like OS did
not map well into the Z80's instruction set, or required use of the highly
inefficient IX and IY instructions. Hence the OS was both bigger and slower
than it could have been. On the other hand, the tty driver was written using
more "classical" coding style, with all the usual sorts of "clever" tricks you
can do in assembler. It was slower than frozen mud, buggy, and impossible to
maintain.

<csg>

S Radtke

unread,
May 9, 1986, 1:49:54 PM5/9/86
to

> Some years ago when I was learning 6800 assembler (anybody remember
>D2 kits?) I used to first write everything in C and then hand compile it
>into M6800 asm. When I told my professor (are you reading this Professor
>Efe?) that I did this instead of drawing flow-charts, he laughed at me, but
>as long as you keep your code simple, the conversion is trivial and can be
>done in your head as fast as you can write down the asm code. Perhaps the
>...

I just read an article from IEEE Transactions on Software Engineering,
Feb. 1986, by Peter Henderson, "Functional Programming, Formal Specification,
and Rapid Prototyping". Except for the hand compilation,
which could be avoided, you were doing a similar thing -- using a high-level
language as a formal specification for a lower-level language. Your code
was an executable specification and allowed rapid prototyping to influence
the design decisions at an early point.

Steve Radtke
Bell Communications Research
Piscataway, NJ

Frank Adams

unread,
May 9, 1986, 4:20:55 PM5/9/86
to
In article <23...@phri.UUCP> r...@phri.UUCP (Roy Smith) writes:
> Some years ago when I was learning 6800 assembler (anybody remember
>D2 kits?) I used to first write everything in C and then hand compile it
>into M6800 asm. When I told my professor (are you reading this Professor
>Efe?) that I did this instead of drawing flow-charts, he laughed at me,

Well, he shouldn't have. Writing "pseudo-code" first (it needn't be
compilable or even in a well-defined language) is a standard design
technique, both for assembly code and for code in higher level languages.
In my experience, it is a lot more common than flow-chart writing. Flow
charts are appropriate only for the rare program with an inherently very
complex flow of control. In other words, if you find you need to write a
flow chart to get the logic right, you should first try to go back and
redesign the algorithm, or modularize it better.

Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Multimate International 52 Oakland Ave North E. Hartford, CT 06108

Scott Dorsey

unread,
May 10, 1986, 11:54:58 AM5/10/86
to
In article <2...@atari.UUcp> dy...@atari.UUcp (Landon Dyer) writes:
>Here's a perverse thought: Has anyone done any research on
>architectures to help people writing /assembly language/? (Maybe the
>PDP-11, VAX or IBM-370 architectures are optimal, or maybe no one has
>ever considered making life easier for those who spend their lives
>coding "down unda.")

Ever seen the UCSD Pascal system? It had a Pascal compiler which
compiled down to P-Code, which was like a machine code with high-
level language features. The P-Code was then interpreted by the
machine, in my case an Apple II. I wrote an assembler in Pascal which
allowed you to code directly in P-Code. Compared to the 6502, it
was pure joy... array support, real numbers. It was slow, but better
than either the Pascal or the Basic.
I know what I like in an architecture. I could build an instruction
set that is just right for me. However, other people might not like
it. Assembly coding is a very personal thing. I think that something
like a P-code might be in order to allow any person to develop code
using his 'own' instruction set, and have it compile to machine code
for other machines. It would be mindblowingly slow, especially because
the architecture I would use would be unlike the machines I am forced
to work on.
BCD support? Come on.
And, no, I don't like using floating-point instructions to
manipulate characters.
--
-------
Disclaimer: Everything I say is probably a trademark of someone. But
don't worry, I probably don't know what I'm talking about.

Scott Dorsey " If value corrupts
kaptain_kludge then absolute value corrupts absolutely"

ICS Programming Lab (Where old terminals go to die), Rich 110,
Georgia Institute of Technology, Box 36681, Atlanta, Georgia 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!kludge

Gregory Smith

unread,
May 12, 1986, 7:35:20 PM5/12/86
to
In article <4...@rna.UUCP> d...@rna.UUCP (Dan Ts'o) writes:
>> Some years ago when I was learning 6800 assembler (anybody remember
>> D2 kits?) I used to first write everything in C and then hand compile it
>> into M6800 asm. ...

>> simplicity of the M6800 (dare I call it a RISC machine? :-)) makes this
>> easier than for something like a vax, but I can still do it; let's see:
>>
>> char i, j;
>> j = 0;
>> for (i = 0; i <= 10; i++)
>> j = j + i;
>>
>> clr j
>> loop1: clr i
>> cmp i, #10
>> bge out
>> lda j
>> adda i
>> sta j
>> lda i
>> inca
>> sta i
>> bra loop1:
>> out:
>
> Not too good, I'm afraid. The loop1 label is misplaced - you "clr i"
>on every iteration. Also #10 is octal 8 instead of 10. You probably don't want
>the colon at the end of the "bra" statement. You don't want "bge out" but
>"bgt out" (since you say i <= 10). And finally, i and j should be local
>variables on the stack. I can see why your professor laughed.

If you must flame, do it properly. As I remember, in Motorola assembler,
10 is decimal, $10 is hex. Besides, you don't expect that sort of detail
in a news posting, do you? You missed the biggie completely: there is no
such thing as `cmp i,#10' in 6800 code. I believe `inc i' could have been
done, instead of lda/inc/sta.
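Folding together the thread's fixes (initialize i once, before the loop label; load i before comparing, since the 6800 has no memory-to-immediate cmp; bgt rather than bge for i <= 10; inc i instead of lda/inca/sta; no colon on the bra operand), a corrected hand-translation might read as sketched below. It is written as C with 6800-style instructions in the comments, so the loop's result can actually be checked; the exact mnemonics (ldaa/cmpa/staa) are from memory and should be treated as a sketch, not assembler-ready source.

```c
#include <assert.h>
#include <stdint.h>

/* Roy's loop with the corrections folded in; the corrected 6800-style
 * translation appears in the comments. */
static uint8_t sum_to_ten(void)
{
    uint8_t i, j;
    j = 0;                  /*        clr  j                      */
    i = 0;                  /*        clr  i   (done ONCE)        */
    while (i <= 10) {       /* loop1: ldaa i                      */
                            /*        cmpa #10                    */
                            /*        bgt  out (not bge)          */
        j = j + i;          /*        ldaa j                      */
                            /*        adda i                      */
                            /*        staa j                      */
        i = i + 1;          /*        inc  i   (one instruction)  */
    }                       /*        bra  loop1  (no colon)      */
    return j;               /* out:                               */
}
```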

Hand-compiling is definitely a good idea when a good compiler cannot
be had. When you write the C code, you deal at a reasonably high level
of abstraction; when you translate, you worry about details only and forget
the higher meaning. One major problem is with maintenance: when people
modify the assembler code, they don't bother changing the C code!

There are cases where a good-enough real compiler cannot be had.
If someone knows of a C compiler for the Z80 or 8080 which can produce code
half as good as that produced by hand, I would like to hear about it. I
am sure it could be done - but nobody would pay you enough to make it
worth the effort. It would almost be an AI project!

While debugging, you have to figure out if bugs are caused by problems
in the C code ( many of these will be found during the hand-compile ) or
by incorrect compilation. If you have a real compiler, no matter how
horrible, you can often debug the C code with that before hand compiling,
and then hand-compile in stages. 'In stages' implies that the parameter
passing convention for the hand code follow that of the compiler, which
may not be desirable - especially on the kind of machine we are talking
about here.


--
"Canabee be said2b or not2b anin tire b, if half thabee isnotabee, due2
somain chunt injury?" - Eric's Dilemma
----------------------------------------------------------------------
Greg Smith University of Toronto UUCP: ..utzoo!utcsri!greg

Robert Munck

unread,
May 12, 1986, 10:25:57 PM5/12/86
to
In article <2...@pyuxv.UUCP> s...@pyuxv.UUCP (25220-S Radtke) writes:
>
>> Some years ago when I was learning 6800 assembler (anybody remember
>>D2 kits?) I used to first write everything in C and then hand compile it

As a minor addition to the technique (and to show that you're not
forced to expose yourself to possible chromosome damage by using C)
I wrote the Navy's standard executive for their 16-bit line of computers
(AN/UYK-20, -44, AN/AYK-14) first in Ada (as it existed in 1979). Of
course, there were no Ada compilers then, so I hand-translated some of
the important algorithms -- task scheduling, memory allocation -- into
Pascal that I could compile and run. This was used to debug and tune
them for speed. Finally, I wrote some assembler macros supporting
IF-THEN-ELSE, looping, and subroutine invocation and hand-translated
the Ada/Pascal into assembler/macros.

The contract was competitive: a team from an unnamed mainframe
manufacturer wrote their own version, but all in assembler. There
was great emphasis on speed, so they wrote theirs as one giant
assembly program to avoid the overhead of subroutine calls. Mine
was divided into 60-odd modules. They spent their time on micro-
optimization of the assembly; I spent mine on the overall design.
Mine was about 10% faster.

The OS I'm writing now, for the 80386, will be in Ada from start
to finish. If, when it's up and running, I find that 90% of its
execution time is in 10% of the code (which is likely), I'll look
at that 10% for possible recoding of strategic modules in assembler.
I'll also keep the Ada routines that are replaced, for future
retargeting.
-- Bob Munck

joh...@uiucdcsp.cs.uiuc.edu

unread,
May 13, 1986, 4:07:00 PM5/13/86
to

I consider designing assembler code using a high-level language to be
"motherhood". I have always done it that way (since I wrote my first
big assembly program in 1976), I thought most "modern" programmers did
it that way, and I teach all my students to do it that way. Am I
hopelessly naive?

Peter Ludemann

unread,
May 14, 1986, 8:11:32 PM5/14/86
to

Not at all. I like to write my code first in Prolog, then translate
it into something low level like C. The high level code makes good
comments for the low level stuff.

her...@umn-cs.uucp

unread,
May 15, 1986, 5:18:00 PM5/15/86
to

I believe the military did some research to find architectures
which optimized programmer performance for assembly languages.
I don't know of any references, but someone from Johns Hopkins
applied physics lab once made the claim that the PDP-11 and the
ANYUK-20 (?sp) were far and away the best machines for helping
programmers produce working code quickly.
Interpreters for machines are also quite old; back before
any machines had floating point hardware, it was not an uncommon
practice to program in pseudo-ops which were then interpreted.
These pseudo-ops often looked much like P-codes. This is mentioned
in some of the introductory textbooks on computer languages.
Anybody else hear about anything like either of these?

Robert Herndon
...!ihnp4!umn-cs!herndon

S.Davidson

unread,
May 16, 1986, 2:07:49 PM5/16/86
to
> In article <2...@pyuxv.UUCP> s...@pyuxv.UUCP (25220-S Radtke) writes:
>
> As a minor addition to the technique (and to show that you're not
> forced to expose yourself to possible chromosome damage by using C)
> I wrote the Navy's standard executive for their 16-bit line of computers
> (AN/UYK-20, -44, AN/AYK-14) first in Ada (as it existed in 1979). Of
> course, there were no Ada compilers then, so I hand-translated some of
> the important algorithms -- task scheduling, memory allocation -- into
> Pascal that I could compile and run. This was used to debug and tune
> them for speed. Finally, I wrote some assembler macros supporting
> IF-THEN-ELSE, looping, and subroutine invocation and hand-translated
> the Ada/Pascal into assembler/macros.
>
Ada was used as a high-level specification of microcode for the Intel 432.
At that time they also didn't have a compiler, so they hand-translated and
optimized it. They even had to use an enhanced version of Ada to handle all
the things they wanted to do in microcode. (Source - talk by Dan
Hammerstrom at the 14th Microprogramming Workshop.)

I think this is a good technique for microprogramming where no compilers are
available. Has anyone else used it?

--
Scott Davidson
AT&T Engineering Research Center
..!{allegra,ihnp4}!erc3ba!sd
(609) 639-2289
P.O. Box 900
Princeton, NJ 08540

Roy Smith

unread,
May 17, 1986, 10:33:35 AM5/17/86
to

In article <4...@rna.UUCP> d...@rna.UUCP (Dan Ts'o) flamed me for a
bunch of syntactic and semantic errors in my hand-assembly example. A
while later, in article <27...@utcsri.UUCP> gr...@utcsri.UUCP (Gregory Smith)
continued to find errors in my 6800 code. The lesson here is that if you
are going to post something to the net, better make sure it is right. Dan
and Greg are right that my translation was a mess.

The point I was trying to make (and so far, I haven't seen anybody
who has disagreed with me) is that HLLs make great tools for helping write
assembler. Once the code is written, the HLL version should be kept around
as documentation.

Rex Ballard

unread,
May 18, 1986, 1:48:47 AM5/18/86
to

[My long list of items on my "wish list for UNIX"]
[Reply indicating OS-9 has all these features]

Wonderful! Now, if we could just figure out how to get it on
VAXEN and 6/32's :-)

Seriously, anyone considering even a UNIX port should look hard at the
principles and ideas behind OS-9 68K. It is to UNIX what MS-DOS 2.0
was to CP/M 2.2.

AT&T and BSD could both benefit from at least employing the "concepts"
of OS-9.

Failing this, we can hope that Microware broadens its offerings to
other processors.

Rex Ballard

unread,
May 18, 1986, 2:19:50 AM5/18/86
to
In article <2...@atari.UUcp> dy...@atari.UUcp (Landon Dyer) writes:
>Here's a perverse thought: Has anyone done any research on
>architechures to help people writing /assembly language/? (Maybe the
>PDP-11, VAX or IBM-370 architechures are optimal, or maybe no one has
>ever considered making life easier for those who spend their lives
>coding "down unda.")

Actually, a number of attempts at this have been made.
The Z-80 attempted to use an instruction set which was easier to remember
than its Intel counterpart's. The 68K attempted to improve on this by
providing the full matrix of operators and operand addressing modes.

The latest improvement is the Transputer. Seems that they don't use
"assembler mnemonics", but instead use primitive equations.

For example instead of:

move A,B
add (R1),(R4)

They use the more intuitive

A=B
(R4)=(R4)+(R1);

Or something like it. As pointed out before, early predecessors of C
were little more than attempts at a "generic assembler". It was only
when it was expanded into a full LALR grammar that the whole issue
of optimizing came up.

Unfortunately, instruction sets are normally determined by the manufacturer.

Strangely enough, one of my jobs was to go in the other direction. Instead of
taking C code and turning it into assembler, I had to convert assembler
into C. Has anybody figured out a way to do this with a compiler!!!
It was real cute to do this with the 8085 and 6502.
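Going assembler-to-C by hand works much like the forward direction in reverse: model the registers as explicit C variables and translate one instruction at a time. A hypothetical 8085-flavoured fragment (my own example, not the code from that job) might decompile along these lines:

```c
#include <assert.h>
#include <stdint.h>

/* Hand-decompilation sketch: each register becomes a C variable and
 * each 8085-ish instruction becomes one statement (the original
 * instruction appears in the comments).  The fragment counts the
 * nonzero bytes in a table pointed to by HL, with the count in B. */
static uint8_t count_nonzero(const uint8_t *hl, uint8_t b)
{
    uint8_t a;
    uint8_t c = 0;              /*        mvi c,0         */
    while (b != 0) {            /* loop:                  */
        a = *hl;                /*        mov a,m         */
        if (a != 0)             /*        ora a / jz skip */
            c = c + 1;          /*        inr c           */
        hl = hl + 1;            /* skip:  inx h           */
        b = b - 1;              /*        dcr b           */
    }                           /*        jnz loop        */
    return c;
}
```

The mechanical one-to-one mapping is what makes the translation tractable by hand; a compiler-like tool would mostly need to recover the loop structure from the jumps.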

Richard Jennings

unread,
May 18, 1986, 11:24:42 PM5/18/86
to
>>Here's a perverse thought: Has anyone done any research on
>>architechures to help people writing /assembly language?

>Actually, a number of attempts at this have been made.

>The latest improvement is the Transputer. Seems that they don't use
>"assembler mnemonics", but instead use primitive equations.
>
>For example instead of:
>
>move A,B
>add (R1),(R4)
>
>They use the more intuitive
>
>A=B
>(R4)=(R4)+(R1);
>
>Or something like it.

Not really. According to the 10 January 86 Inmos Compiler Writer's
Guide for the Transputer, what you do to add two numbers is
push the numbers onto the 3-register hardware stack and hit add.

Anybody who remembers the Rockwell CMOS/SOS Forth machine would
*love* the transputer. Occam (in my view) is a red herring. There
are some really slick features which the Occam environment 'hides'
from the user.

Assuming that A and B are local variables, the code to add A to B
and store the result in B is (my macro on the left, hex encoding on
the right):

ldl A   ; x7f
ldl B   ; x7e
add     ; xf5
stl B   ; xde

where A is stored f locations from the 'workspace pointer', and B is
stored e locations from the 'workspace pointer'.

If this interests you, bug your local Inmos rep for a Compiler
Writer's Guide.

r

Radford Neal

unread,
May 20, 1986, 3:32:18 PM5/20/86
to

Yes. The modern programmer only writes in assembly language if he needs
to use machine features not accessible from a high-level language or
he needs to get that last bit of speed. In neither case is writing the
program in C first especially helpful. Note that nobody writes "big"
assembly language programs anymore.

For example, the last chunk of assembler (VAX) I wrote was for catching
a Unix signal, forking, letting the parent continue, and doing a core
dump (via setting the trace trap bit in the PSL) in the child after
fiddling the stack frames to reflect the situation before the signal
in the parent, so the debugger will print out the right stack trace,
etc. Part of this was in C, but the part that wasn't was completely
unexpressible in C.

The last (relatively) large piece of assembler (68000) I wrote was a set
of bit-map manipulation routines. This was in assembler for efficiency.
It used techniques such as completely unrolling a single-instruction
loop (executed at most about 50 times) and branching into it at the
right place, based on the instructions all being two bytes in size. Not
the sort of thing one does in C, or that one does in assembler if you
start from a C program. It's fast though.
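The closest portable C rendering of "branch into the unrolled loop at the right place" is the switch-into-a-do-while trick (Duff's device). The sketch below uses an unroll factor of four rather than a fully unrolled 50 copies, and the function and its names are my illustration, not Radford's bit-map routines:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Duff-style entry into an unrolled loop: the switch jumps into the
 * middle of the body so the first (n % 4) stores run once, then whole
 * groups of four stores run per pass of the do-while. */
static void fill_bytes(uint8_t *dst, uint8_t value, size_t n)
{
    size_t groups = (n + 3) / 4;    /* total passes through the body */
    if (n == 0)
        return;
    switch (n % 4) {                /* enter mid-body for leftovers  */
    case 0: do { *dst++ = value;    /* FALLTHROUGH */
    case 3:      *dst++ = value;    /* FALLTHROUGH */
    case 2:      *dst++ = value;    /* FALLTHROUGH */
    case 1:      *dst++ = value;
            } while (--groups > 0);
    }
}
```

In assembler the same effect is cheaper still, since (as Radford notes) you can compute the branch target directly from the fixed instruction size.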

All this may not be true if you're stuck writing assembler on a machine
without an adequate high-level language, but I presume that this is no
longer common. It also doesn't directly apply when you're writing a
large assembler program because you're learning about assembly language.
Starting from C might be good in that case. Then again, maybe it would
just lead to the student not learning anything about machine architecture
that isn't expressible in C.

Radford Neal

Joe Buck

unread,
May 22, 1986, 6:44:23 PM5/22/86
to
In article <1...@vaxb.calgary.UUCP> rad...@calgary.UUCP (Radford Neal) writes:
>In article <3700003@uiucdcsp>, joh...@uiucdcsp.CS.UIUC.EDU writes:
>> I consider designing assembler code using a high-level language to be
>> "motherhood". I have always done it that way (since I wrote my first
>> big assembly program in 1976), I thought most "modern" programmers did
>> it that way, and I teach all my students to do it that way. Am I
>> hopelessly naive?
>
>Yes. The modern programmer only writes in assembly language if he needs
>to utilize machine features not possible from a high level language or
>he needs to get that last bit of speed. In neither case is writing the
>program in C first especially helpful. Note that nobody writes "big"
>assembly language programs anymore.
>...

>All this may not be true if you're stuck writing assembler on a machine
>without an adequate high level language, but I presume that this is no
>longer common.

Those of us who do real-time digital signal processing write lots of
assembler, and for heavily pipelined, irregular architectures. The
kinds of things we program can be done in high-level languages, only
orders of magnitude slower. Compilers are few and far between. I
personally follow the "human compiler" approach, translating C to DSP
code and letting the C routine serve as part of the documentation.
I find C to be the most natural language to do this with because of
its "close-to-the-machine" style.

Obviously we need net.dsp to discuss such things.
--
- Joe Buck {ihnp4!pesnta,oliveb,csi}!epimass!jbuck
Entropic Processing, Inc., Cupertino, California
Better living through entropy!

k...@ut-ngp.uucp

unread,
May 24, 1986, 12:49:33 AM5/24/86
to
> [...] Note that nobody writes "big"
> assembly language programs anymore. [Radford Neal]

I have written many, many lines of assembler in the past 5+ years. (I don't
know the exact number of lines, but it has to be in the tens of thousands.)
I'm currently finishing off a project which will be close to 10,000 lines,
all in assembler. BTW, I have never used an HLL as a "precursor"; I think
that is a truly preposterous idea. I have, however, used various sorts of
assembler-like "pseudo-codes", on occasion.

--
The above viewpoints are mine. They are unrelated to
those of anyone else, including my cat and my employer.

Ken Montgomery "Shredder-of-hapless-smurfs"
...!{ihnp4,allegra,seismo!ut-sally}!ut-ngp!kjm [Usenet, when working]
kjm@ngp.{ARPA,UTEXAS.EDU} [Old/New Internet; depends on nameserver operation]
k...@ngp.CC.UTEXAS.EDU [Very New Internet; may require nameserver operation]

hof...@hdsvx1.uucp

May 24, 1986, 8:30:06 AM
>>> Some years ago when I was learning 6800 assembler (anybody remember
>>> D2 kits?) I used to first write everything in C and then hand compile it
>>> into M6800 asm. ...


I used this technique in the late 70's at Texas Instruments, when I was given
a summer to code some mathematical utilities against some *very* tight size
and timing considerations. I first wrote the code in FORTRAN (it was all we
had, plus it wasn't so bad for short math programs like that), debugged it
completely, then compiled it with the list option, and hand-optimized loops
and register usage. It worked like gang-busters, and probably increased my
productivity by 200% since it only took me a month to do the whole set when
they had expected it to take three months-- especially since it was my first
contact with assembly language!

I used this technique again several years later to do some programming on a
Motorola 68000 -- not only is it fast, but it's a great way to learn a new
assembly language, and it provides an extra layer of testing and documentation.
I never have understood why everyone doesn't use it. Even if your compiler
is a real dog, you can fix bad code that you understand a lot easier than you
can write good code from scratch.

Richard Hoffman
Schlumberger Well Services
hoffman%hds...@slb-doll.csnet

Gregory Smith

May 25, 1986, 12:38:33 AM
In article <1...@vaxb.calgary.UUCP> rad...@calgary.UUCP (Radford Neal) writes:
>
>Yes. The modern programmer only writes in assembly language if he needs
>to utilize machine features not possible from a high level language or
>he needs to get that last bit of speed. In neither case is writing the
>program in C first especially helpful. Note that nobody writes "big"
>assembly language programs anymore.
>...

>All this may not be true if you're stuck writing assembler on a machine
>without an adequate high level language, but I presume that this is no
>longer common. It also doesn't directly apply when you're writing a ...
>
> Radford Neal

It's still pretty common, and probably will be for some time. Consider
the ratio

Quality of hand-written code
------------------------------------
Quality of code from a good compiler

I would expect around 1.2 for a 68000 (the only 68K C compiler I've seen was
about a 1.7 but it wasn't *good*). However, I imagine a *good* 6502 compiler
would do well to score a 10 ( There's still a *lot* of those little beasties
out there...). More importantly, there are those great little single-chip
computers ( 8048, 6805 etc ) which would probably also come up in the 10-15
range on the above ratio. Since you have limited ROM ( meaning that every
byte counts and you can't write a *really* long program, anyway ), I doubt
that anyone has tried to write a compiler for these guys. You program them
in assembler, that's all. If you don't think these things are common, take
a good look at auto electronics.

Someday, auto computers may be programmed in C ( or D or E..), but
people will probably still be hand-coding digital watch controllers. When
*those* are compiled, they will be hand-coding automatic electronic pencil
controllers...

The world isn't all 68K's and vaxen and stuff like that...

--
"We demand rigidly defined areas of doubt and uncertainty!" - Vroomfondel

Roger Shepherd INMOS

Jun 7, 1986, 6:33:34 AM
In article <4...@ccird1.UUCP> Rex Ballard writes:
> All High level languages do is attempt to organize the
> macros and subroutines that might otherwise be written in
> assembler. They also provide some convenient and well
> standardized procedures and parameter passing conventions.

I disagree with this view. A high level language should be more than a glorified
macro assembler. It is possible to design a high level language so that
it actually has a clean, coherent and useful semantics. It can then be used
as a way of describing algorithms mathematically. Such languages can even
be compiled!

Of course, very few existing HLLs have such a semantics. However, I have been
using one that does and one which compiles efficiently onto a processor.
Furthermore, its mathematical semantics is useful and COST-EFFECTIVE. One
of the projects which I have been involved with is the production of an
HLL-coded version of the IEEE Floating Point standard. Our initial efforts
at writing and testing said package were very laborious and insecure.
We tried testing the package by comparing it with what were believed to be
correct implementations. This led us to find an error in a currently
available floating point chip! The adoption of formal mathematical methods
(to which our HLL is susceptible) means that a correct version was produced
in a few weeks.

--
Roger Shepherd, INMOS Ltd, WHITEFRIARS, LEWINS MEAD, BRISTOL, UK
USENET: ...!euroies!shepherd
PHONE: +44 272 290861

Rex Ballard

Jun 12, 1986, 3:50:55 PM
In article <3...@euroies.UUCP> shep...@euroies.UUCP (Roger Shepherd INMOS) writes:
>In article <4...@ccird1.UUCP> Rex Ballard writes:
>> All High level languages do is attempt to organize the
>> macros and subroutines that might otherwise be written in
>> assembler. They also provide some convenient and well
>> standardized procedures and parameter passing conventions.
>
>I disagree with this view. A high level language should be more than a glorified
>macro assembler. It is possible to design a high level language so that
>it actually has a clean, coherent and useful semantics. It can then be used
>as a way of describing algorithms mathematically. Such languages can even
>be compiled!

In a sense, I agree with you. What you're describing is the human->compiler
side of what a language should be. I am describing what the compiler->machine
side should be.

How good the interface between the two views is, is a good definition of
the quality of a compiler. If restrictions are loose enough regarding
the nature of the variables, definitions, and operators, one could express
algorithms in "structured english" (as opposed to the hodge-podge we normally
attribute to the language), and have it compiled into a sequence of subroutines,
macro expansions, and instructions which could be executed by a computer.
On the other hand, "structured english" is a little tedious to type. Perhaps
a "Decompiler" could translate 'C' code tokens into structured english :-).

Not only could it be compiled into instructions, it could also be compiled
into useful data bases, flow charts, structure charts, and even "design
completers" which could find common primitives in complex systems, and
express them in the same structured language. These 'design completers'
are more commonly referred to as optimization, but often the input, as
well as the assembler could be optimized by machine, leaving only the
most commonly executed parts of the code to be "super-optimized" by hand,
or even hardware.

>Of course, very few existing HLLs have such a semantics. However, I have been
>using one that does and one which compiles efficiently onto a processor.

Which language is this?

>Roger Shepherd, INMOS Ltd, WHITEFRIARS, LEWINS MEAD, BRISTOL, UK

Nice company, looking forward to seeing more info on your products.

Phil Mason

Jun 13, 1986, 11:04:11 AM
As Chuck Moore (the inventor of FORTH) says :
"You may have noticed that FORTH is a polarizing concept. It is
just like religion and politics, there are people who love it
and people who hate it and if you want to start an argument,
just say, "Boy, FORTH is a great language."

Well, I take a more positive view where FORTH is concerned.

FORTH represents a different kind of computing environment. It is its own
operating system, compiler, assembler, generic user-interface, development
system - you name it. FORTH can be thought of as a set of software tools, a
development system, a high-level language, a low-level language, or even an
application specific language. It depends on the viewpoint of the actual
user/programmer and the specific application whether or not FORTH fits into
any category at all. FORTH is easily extensible in a way that few other
computer languages are. You customize it for the application at hand.

FORTH was designed at a time that memories and mass-storage devices were
small. FORTH was created to match several criteria :
* small size : threaded code is about as small as you can get.
* simplicity : FORTH's structure IS simple when it really comes down to it.
* versatility : if you need to customize, no problem. Need speed? Embed
assembly code among your actual FORTH source using a built-in
assembler.

FORTH really wasn't designed to make intelligent use of large memories,
handle large applications, manage large mass-storage devices; but, FORTH
CAN be customized, if desired, to tackle these environments. The only reason
I would bother to extend FORTH for big machines would be for portability of
applications from/to ANY micro-, mini- or super-computer. I have witnessed
the same EXACT source code run on a big computer, a workstation and a
microcomputer. Then again, you can do this with other languages as well,
if you are careful. You generally can't do it with operating systems due
either to the nature or size of them or copyright/proprietary/license hang-ups.

There are three main dialects of the language : FIG, 79-Standard,
and 83-Standard. Because FORTH is easily customized, there are literally
THOUSANDS of unique installations of the language around the world. This
can cause problems with portability unless code was based on one of the
major dialects. FORTH is easy to bring up on most computer systems.
A small amount of assembly coding is necessary for the innermost interpreter
and I/O, then the vast majority of FORTH is written in FORTH itself.

Only when you have really tried to learn a language and its methods and
meaning, can you appreciate its strong points and shortcomings. Your first
computer language was probably the hardest for you to learn. FORTH is different
enough in philosophy to present an intellectual challenge to those who want it.
You can write bad, unreadable, undocumented and thoroughly icky code in ANY
language you choose.

If you want to have a small, simple, extensible, self-contained development
environment that will run on almost anything, FORTH is probably for you.
If you want to go the regular route of C or PASCAL with a relatively hard
to customize operating system that you can't run on your micro at home, fine.
I happen to prefer using the environment suited for my application. It just so
happens that I see applications for FORTH where others may not.

---
Sorry for you guys in net.arch who are getting tired of HLL-->Assembly
discussions, but I think that FORTH is an interesting topic in the context
of direct threaded-code engines and self-contained environments as well as
a language for describing algorithms.

Flames (obnoxious or not) to /dev/null.
Intelligent criticism accepted.


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Kirk : Bones ? Phil Mason, Astronautics Technical Center
Bones : He's dead Jim. {Your Favorite System}!uwvax!astroatc!philm

My opinions are mine and not necessarily those of my employer.
(I would like to think that my employer believes in them too.) :-)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

kep...@pavepaws.uucp

Jun 14, 1986, 10:39:38 AM
In article <4...@astroatc.UUCP> ph...@astroatc.UUCP (Phil Mason) writes:
>* small size : threaded code is about as small as you can get.

Pardon, but I've never quite understood what threaded-code is.
Could somebody give me an explanation of
o what it is
o why it is fast
o why other major languages don't use it or don't admit to using it


Thanks much (Followups to net.lang).

;-D avid K eppel ..!ucbvax!pavepaws!keppel
"Learning by Osmosis: Gospel in, Gospel out"

Wayne A. Christopher

Jun 16, 1986, 12:28:58 PM
Please explain to us how some of the features you mention work. What is
"threaded"? How can forth be both a "high level language" and "low level
language"? How can it be customized? And most importantly, what does it
look like? Can you include a short program in both C and forth so we can
compare them? Thanks,

Wayne

Chad R. Larson

Jun 16, 1986, 3:35:47 PM
In article <21...@peora.UUCP> j...@peora.UUCP (J. Eric Roskos) writes:
>
>I'm not entirely sure what this has to do with net.arch, but aside from
>that, this raises an issue that's been puzzling me for several years.
>...what is the real *advantage* of FORTH?

You're right, it probably doesn't belong here, so I've directed
followups to net.lang.forth, where it may start a new discussion
and/or flame war.

The two main advantages of Forth are user extensibility and
interactive development. Most of the stuff about threaded
dictionaries and all that you hear are there to bring about the two
above features. (I'm talking about a true Forth here, not just a
compiler that can handle Forth syntax. A true Forth is an environment
for the programmer, not only a set of tools.)

The interactive environment is one where you can test each new
function or primitive (called "words") right from your keyboard as you
enter it. There is not (at least in the usual sense) an edit-compile-
link-load-debug cycle. You figure out a function you need, type it in
and test it. If it works as you wish, it is now available to be used
in any new words _right_now! You can test any portion of your program
from the highest level definition to the lowest primitive at any time
to satisfy yourself on their working. Not under a debugger... just as
you are working along. You don't have to wait several minutes for
MAKE to do its thing in order to test a one word change.

The user extensibility is the real productivity aid, and the thing
that gives Forth its "Forthness". Programming in Forth is actually
developing a new language, specifically tailored to your application.
You start with the set of primitive words all Forth's come with and
string them together to define new words that do some function you
need done. Those new words are used to define higher function words
until you eventually have a rich vocabulary of functions keyed exactly
to what you are trying to do. When your program is done, and the user
inevitably requests "just one more thing", you can frequently add it
in a couple of minutes by creating another word made up of six or
eight words you already have, since those words are so well fitted to
what you are up to. Programs only grow by a dozen or so bytes when
you do this.

Sometimes Forth fans tend to take on religious fervor. This sometimes
is off-putting to interested people, but in many ways Forth is a
religion (or at least a state of mind). When you put your Forth hat
on, you tend to look at programming problems in a different light.

If I haven't bored you yet, some good books are "Starting Forth" and
"Thinking Forth" by Leo Brodie. The first deals with the language and
an implementation of it, the second with how good Forth programmers
tend to approach problems.

Disclaimer: Others undoubtedly have their own Favorite Forth
Features. Remember the religious aspects.

: ?DISAGREE IF FLAME ELSE :-) THEN ;

--
"I read the news today...oh, boy!" -John Lennon
_____________________________________________________________________
UUCP: {mot}!anasazi!chad Voice: Hey, Chad!
Ma Bell: (602) 870-3330 ICBM: N33deg,33min
Surface: International Anasazi, Inc. W112deg,03min
7500 North Dreamy Draw Drive
Suite 120
Phoenix, AZ 85020

David England

Jun 17, 1986, 8:42:13 AM
In article <4...@ccird1.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
>In article <3...@euroies.UUCP> shep...@euroies.UUCP (Roger Shepherd INMOS) writes:
>>In article <4...@ccird1.UUCP> Rex Ballard writes:
>>> All High level languages do is attempt to organize the
>>> etc

>>
>>I disagree with this view. A high level language should be more than a glorified
>>macro assembler.
>>etc

>
>In a sense, I agree with you. What you're describing is the human->compiler
>side of what a language should be. I am describing what the compiler->machine
>side should be.
>
... and as far as the human->compiler side of things is concerned the days
of languages are numbered. Visual programming and spatial data management
are the "languages" of the future. In five years time there should be no
net.lang* :-). If this was a bit mapped screen I would draw this article
as a vt100 icon being thrown into a trash can icon :-).
--
Dave uucp: ...!mcvax!ukc!dcl-cs!de
arpa: de%lancs.comp@ucl-cs

"I like to go where the action is. Move in, move out. And no paper work."

Phil Mason

Jun 17, 1986, 12:37:55 PM
>compare them? Thanks.

By the way, follow-ups are to "net.lang.forth". Subscribe now! I think we've
exhausted the architectural arguments pertainent to "net.arch".

"Threaded code" refers to the technique of describing algorithms by the use of
a list of the addresses of all of the component routines in the proper order.
A very simple interpreter recursively follows each address entry in the
algorithm, following each of the address entries for each component routine
and each component of each component, etcetera, until machine code is found at
the lowest level. The machine code is executed and then you are popped up a
level and continue interpreting/executing until you are finished with the
routine.

Threaded code is incredibly compact and simple to analyze. The major
drawback is the expense of subroutine calls (i.e., thread following) on most
computers. Threaded code is at least as efficient as the product of a typical
code generator of a medium quality compiler. Good hand coding and a good
compiler can almost always perform better than raw threaded code. FORTH, which
uses threaded code, allows embedded assembly language among the threaded code
for speed, if desired.

Threaded code works best if all arguments and results of the routines are
handled via a stack. In FORTH, the argument/result stack is called the
"parameter stack". Routines take from the stack the arguments that they need
and put onto the stack their results. The parameter stack is directly analogous
to the stack of an HP calculator. For example, the "+" key takes two parameters
off the stack and puts their sum onto the stack. FORTH uses PostFix or Reverse
Polish Notation as do HP calculators.

I don't happen to have meaty examples lying around right now, but here is some
simple FORTH code to do factorials (iteratively) :

: FACT ( n -- ) ":" means begin a FORTH routine definition called "FACT".
"( n -- )" is a comment that means FACT takes one
number off of the stack and returns nothing.

DUP "DUP" duplicates the item on the top of the stack, i.e.
there are two "n"s on the stack now.

1 "1" is a number to be pushed onto the stack - the stack
now looks like this : "( n n 1 -- )", "1" is on the
top of the stack.

DO "DO" takes the top two items off the stack - the first one
is the beginning value for the loop and the second
one is the ending value of the loop. The loop will
go from 1 to n-1.

I "I" puts the current value of the loop index onto the
stack.

* "*" takes the top two items off the stack and replaces
them with their product.

LOOP "LOOP" increases the index by one - if it is equivalent
to the ending value, exits the loop; otherwise,
execution branches to the word after the "DO" word.

. "." prints the top item of the stack on the terminal.

CR "CR" issues a carriage return on the terminal.

; ";" ends the definition of FACT.

Upon typing this definition into the FORTH interpreter, try this :

6 FACT

You will get :

720

You can easily write a new definition to make a list of factorials :

: FACTLIST ( n -- )

1 +
1
DO
I DUP
.
FACT
LOOP
;

You should be able to tell me what this one does.

As you can see, FACT is able to be used exactly as if it were a primitive
FORTH word. FORTH is extensible in just this way. You can create vocabularies
of words for use with your application. The implicit parameter passing can
make your life a lot easier. Good documenting skills are really essential
if you want to be able to maintain any code in any language.

FORTH is a high-level language because you can abstract detail and hide
information in modules, just like you can in many other languages.

FORTH is a low-level language since it is really implemented as a
portable pseudo-machine that actually runs FORTH as its native code. All FORTH
words are accessible to the programmer, even the ones used to complete and
enhance the FORTH interpreter. Only a very small part of FORTH is actually in
assembly language. The rest is in FORTH itself!

FORTH is like a software toolbox AND like a sophisticated modular language
rolled into one. I believe that it has the potential to be the best of both worlds.


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


Kirk : Bones ? | Phil Mason, Astronautics Technical Center

Bones : He's dead Jim. | Madison, Wisconsin - "Eat Cheese or Die!"
- - - - - - - - - - - - - - - -|
...seismo-uwvax-astroatc!philm | I would really like to believe that my
...ihnp4-nicmad/ | employer shares all my opinions, but . . .
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Henry Spencer

Jun 17, 1986, 2:17:43 PM
> > All High level languages do is attempt to organize the
> > macros and subroutines that might otherwise be written in
> > assembler...
>
> ...A high level language should be more than a glorified macro assembler...

In particular, a high-level language should do its best to check the
programmer's code for errors -- something no macro assembler will ever
be able to do very well. Features like type checking are not mere frills;
they catch many errors. Decent high-level languages give the translator
a much better grasp of what the programmer is doing, so it can point out
inconsistencies. (It can also optimize the code better -- most hand-written
assembler is not highly optimized, because it's too much work for a human.)
--
Usenet(n): AT&T scheme to earn
revenue from otherwise-unused Henry Spencer @ U of Toronto Zoology
late-night phone capacity. {allegra,ihnp4,decvax,pyramid}!utzoo!henry

Griff Smith

Jun 18, 1986, 8:54:12 PM
[deleted fine, clear comments about the importance of type checking]

> Decent high-level languages ...
> ... can also optimize the code better -- most hand-written
> assembler is not highly optimized, because it's too much work for a human.

Most UNIX system hand-written assembly language is suboptimal because
the programmers haven't learned the coding idioms and/or don't give a
damn. Humans can do a fine job of optimizing assembly language; they
are much better at recognizing special case optimizations that are too
expensive/unlikely to put in an optimizer. My code, when it worked,
was tight and more optimized than I have seen in most compilers. What
usually made my assembly code sub-optimal was the need for
parameterization: I had to MOV a parameterized zero into a register
instead of CLRing it, etc.

I will completely support a paraphrase of Henry's quote: most hand-written
assembler code is incorrect; finding all the inconsistencies is too much
work for a human. I once spent three days finding a "character with a 3
in it" when I intended to declare an "array of 3 characters". Lint would
have found it in 30 seconds.

> --
> Usenet(n): AT&T scheme to earn
> revenue from otherwise-unused Henry Spencer @ U of Toronto Zoology
> late-night phone capacity. {allegra,ihnp4,decvax,pyramid}!utzoo!henry

Keep that revenue coming, folks. Thank you for using AT&T.

--

Griff Smith AT&T (Bell Laboratories), Murray Hill
Phone: (201) 582-7736
UUCP: {allegra|ihnp4}!ulysses!ggs
Internet: g...@ulysses.uucp

Rex Ballard

Jun 19, 1986, 9:29:03 PM
In article <21...@peora.UUCP> j...@peora.UUCP (J. Eric Roskos) writes:
>> When coding in Forth, you do all the coding in the high level language
>> (and can interactively test the code).

>
>I'm not entirely sure what this has to do with net.arch, but aside from
>that, this raises an issue that's been puzzling me for several years.
>In fact, I've even written a couple of postings in the past on it, then
>usually cancelled them because the question was so ambiguous.
>
>The question is, *why* is Forth described in such glowing terms, when the
>attributes that are listed as the reason for such a description are not
>particularly unusual?
>
>Thus my question... what is the real *advantage* of FORTH?

Good parts of forth:
Interactive - imagine, you don't have to go through a make/compile/assemble
phase to test your code. In fact you can experiment with a
few variations to find the best answers. It completely eliminates
the need to "patch" in machine code.
Small - On a machine such as the 8085 or the 6502, a fully serviceable
kernel can fit in as little as 2K, including multi-tasking.
On a machine with a more powerful instruction set, as little
as 1K can be used. For controllers and special purpose boxes,
or in a situation where a large kernel is not desired, this
is another win for forth.

Fast - Compared to hand-written assembler, FORTH is slow, as much as
4 to 10 times slower. However compared to interpreters, and
compilers that lack good optimization, it is very fast, often
100 times faster than BASIC. Forth is also quite easy to
benchmark, since you are basically timing about 26 primitives
and an "inner interpreter".

Maintenance - Most forth applications are subject to frequent change without
notice. Not so much bug fixes as things like "yesterday I tracked
one star, now I want a different one". Robotics, Laser fusion
labs, and a variety of other situations can't wait three weeks
between runs while the enhancements are re-checked.

Modular - Because the programmer is responsible for the parameter stack
large functions are seldom used. The result is that routines
are built on other routines, much the way a VLSI circuit can
be built up from simpler circuits. You can make a DQ flipflop
with just a few gates, use the macro 8 times to make a register,
use the register macro a few times to make bus controllers, add
a little "glue", and have a CPU on one chip. Forth tends to
do the same thing in software. Another advantage is that if
you wish to change a time-out value to tune the system, you
can change or expand the original definition and the "callers"
won't need to be "re-linked". Obviously, an architecture
with fast "calls" can easily capture the same market.

Builds/Does -This is the one novices scratch their heads over for days.
It is also a very powerful concept. If I want to add a subroutine
and call it using a macro, I could write the subroutine, then
write the macro. Effectively, I would be enhancing the language.
Forth has a mechanism to do this for you. This allows you to
build operators that define their own storage, and even perform
their own operations, just by referencing them.

Design Discipline/Creativity - Because forward referencing is very difficult,
there are two basic ways to approach a project in forth. One is to
have a truly complete design, in which common modules have already
been identified, or you can "create" your way up to an incomplete
design. Things like "transform structure A to structure B" have
to be done field by field, which often means that similar fields
can use the same routines. If the structure is complex enough,
it may even be broken down into smaller substructures. As a result,
a complex record can be transformed in as little as 100 lines of
code, where normally a 2000 line 'C' program might have been tried.
This bottom up approach often leads to very tight "macro-level"
designs. If the programmer wishes, he can design his way up,
identifying common types of data and common operations that
could be performed on it. This ultimately leads to an Object
Oriented Design, almost by accident. Forth is not an object
oriented language, but Object Oriented Design is definitely
a valuable skill.

Hardware functions can be done in software - Demand paged memory, various
types of caching, management of various tasks, and peripheral
control can be efficiently done in software. Even tricks like
multi-computing (each with their own ram and storage, working
on different parts of the problem at the same time), can be
almost trivial in forth. This is often done in Robotics where
"fingers" and the "arm" are controlled by different processors
under the direction of a master processor.

Extensibility - Not only can the language be extended, but the Forth
"operating system" can be extended as well.

Of course other languages, such as Lisp, Smalltalk, and Prolog have many
of these features (smalltalk classes are very similar to forth builds/does
clauses). They also require powerful machines to run them effectively.
The main advantage of forth is its "elegant simplicity". Many of these
other languages almost look like descendants of forth.

Forth has disadvantages too:

No OBJECT modules - There is no such thing as an object code library.
Everything depends on the user's access to the source. Without
it, there is little one can do to change the system. It is
possible to "decompile" the object, but the source must be
reloaded. Object can be saved as a whole unit, but not in
loadable pieces. Some Forths are able to do this, but
the tradeoffs are efficiency and memory size.

No OBJECT level portability - Although the source code to a forth
application can be made to run on practically any machine,
everything, including the kernel must also be loaded. Any
Operating System vectors are unique to the system.

Organization - There is almost no organization and little documentation
to a standard forth kernel. You can do a "vlist" and every
word it knows about spews out, in the order of definition.
Source is organized in terms of "screens", which makes things
easy to read, but requires a good associative memory on the
part of the programmer. Here Smalltalk or NEON are ultimately
better.

Information - Since there are no libraries in forth, and most people
have little desire to give away source to their favorite
utilities, the wheel is often re-invented, or at least
hand entered. You can get some utilities via compuserve
and various bulletin boards, but the good stuff, like
animation graphics are well protected from telephones.
Rumor has it that Atari and Activision have some of the
best forth libraries.

Religion - Forth programmers defend their language like fundamentalist
preachers defend the "7 day creation". Infix notation, parameter
management, class definitions, object oriented design, hierarchical
"definition directories", and "standard entry points" have all
been proposed, and rejected. It took 4 years for the '83 standard
to accept the existence of DTL's and STL's, which run 8 to 50 times
faster than the '79 standard on some machines (not all, but some).
Many valuable contributions to general languages are made by people
who get "fed up" with the forth camp and create their own languages.
NEON is a good example.

Architecture - Any machine which can support the incredible depth of calls
	generated by Forth could also run other languages almost as quickly.
	Forth is usually used to hide the architectural deficiencies of the
	host chip; this is what makes it so popular on 8080s and 6502s,
	but a "toy language" on 8086s and 68Ks.

Advantages are better elsewhere - You can get all the advantages of a
Forth language in assembler. Just write the primitives as calls.
You can even eliminate the "load register" instructions. You
sacrifice the interactive nature, but a good symbolic debugger
will give you that.
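To make the point concrete, here is a minimal sketch (in C rather than
assembler, with made-up names) of "writing the primitives as calls": an
explicit data stack, with DUP, *, and + as ordinary subroutines.

```c
/* Sketch: Forth-style primitives as plain subroutine calls.
   An explicit data stack replaces Forth's implicit one; the names
   (push, pop, dup_, plus, star) are illustrative, not taken from
   any real Forth implementation. */
#include <assert.h>

#define STACK_DEPTH 64
static int stack[STACK_DEPTH];
static int sp = 0;                /* index of next free slot */

static void push(int n)  { assert(sp < STACK_DEPTH); stack[sp++] = n; }
static int  pop(void)    { assert(sp > 0); return stack[--sp]; }
static void dup_(void)   { int t = pop(); push(t); push(t); }   /* DUP */
static void plus(void)   { int b = pop(); push(pop() + b); }    /* +   */
static void star(void)   { int b = pop(); push(pop() * b); }    /* *   */

/* The Forth phrase "3 DUP * 3 +" (n squared plus n) as calls: */
static int square_plus_self(int n)
{
    push(n); dup_(); star(); push(n); plus();
    return pop();
}
```

A symbolic debugger can step these calls one word at a time, which is
about as close to Forth's interactivity as compiled code gets.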


Summary:
Anyone who is designing system level architectures, operating systems,
or languages, and has not looked at forth, should spend about as long
with that "system" as they spent in the learning stages of C and Unix,
say a month or two, working in forth on an "El Cheapo" computer like
an Atari or a C-64, preferably doing something like "MacPaint in Forth"
or some similarly trivial task involving graphics. You can almost
write your own forth in about three months (part time) just by reading
Brodie.

FORTH isn't a panacea, it's more of a Pandora's Box, but there are
some good principles, concepts, and disciplines that can greatly
increase your effectiveness as a software (or hardware) engineer.
It is also a veritable Gold Mine of ideas, but to get to the gold,
you've gotta move some dirt.

Henry Spencer

Jun 20, 1986, 3:13:16 PM
> > Decent high-level languages ...
> > ... can also optimize the code better -- most hand-written
> > assembler is not highly optimized, because it's too much work for a human.
>
> Most UNIX system hand-written assembly language is suboptimal because
> the programmers haven't learned the coding idioms and/or don't give a
> damn. Humans can do a fine job of optimizing assembly language; they
> are much better at recognizing special case optimizations that are too
> expensive/unlikely to put in an optimizer. My code, when it worked,
> was tight and more optimized than I have seen in most compilers...

Note that I said "highly" optimized. When I say that, I mean the sort of
code where one spends long periods of time re-writing the code to make
the code a few words shorter or a few microseconds faster. Note also that
I was not talking about "most" compilers -- I was talking about what the
language could do, i.e. what a really hot optimizing compiler could do.
The very best optimizing compilers are just as good at recognizing special
cases and exploiting them as humans, and they are much better at doing the
bookkeeping necessary to do this without error. Building such a compiler
is a tremendous amount of work, of course, not least because one must teach
it about all those obscure special cases. (The Bliss-11 compiler actually
looked to see if it could save a word of "immediate" data by using the
next instruction's opcode as the data. Few humans go that far!) It also
tends to be huge and slow. But once you've got it, it's a much better way
of producing really hot code than doing it yourself.
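A trivial (and purely illustrative) example of the sort of special case
at issue: the two C functions below compute the same thing, and a hot
optimizing compiler will typically reduce the "naive" form to the
"hand-tuned" one itself, via constant folding and strength reduction,
without ever getting the bookkeeping wrong.

```c
/* Same computation, written two ways.  A good optimizer turns the
   first into the second (x*8 becomes x<<3, 16/4 folds to 4); a human
   can do this too, but must repeat it correctly at every site. */
static unsigned naive(unsigned x)      { return x * 8 + 16 / 4; }
static unsigned hand_tuned(unsigned x) { return (x << 3) + 4;   }
```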

I fully agree that human assembler programmers can easily produce rather
better code than the mediocre results from many compilers.

r...@ccird1.uucp

Jun 24, 1986, 11:37:05 AM
In article <2...@comp.lancs.ac.uk> d...@comp.lancs.ac.uk (David England) writes:
>In article <4...@ccird1.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
>>In article <3...@euroies.UUCP> shep...@euroies.UUCP (Roger Shepherd INMOS) writes:
>>>In article <4...@ccird1.UUCP> Rex Ballard writes:
>>>> All High level languages do is attempt to organize the
>>>> etc
>>>
>>>I disagree with this view. A high-level language should be more than a glorified
>>>macro assembler.
>>>etc
>>
>>In a sense, I agree with you. What you're describing is the human->compiler
>>side of what a language should be. I am describing what the compiler->machine
>>side should be.
>>
>... and as far as the human->compiler side of things is concerned the days
>of languages are numbered. Visual programming and spatial data management
>are the "languages" of the future. In five years time there should be no
>net.lang* :-). If this was a bit mapped screen I would draw this article
>as a vt100 icon being thrown into a trash can icon :-).
>--

Good point, Dave.  I hope you are right that the era of "text oriented"
systems is on the way out.  What information do people have on
visual programming?  What effects will this have on architecture?
I have seen a few "flow chart" languages, and yes, they do look promising.
I would hope that "structure chart" languages as well as "graphic libraries"
will also begin to evolve.

Of course, in terms of "system architecture", the intelligent graphics "terminal"
combined with the high-power "file server" opens some very interesting
opportunities in parallel programming as well.

P.S. I set the "follow-up" to net.lang because there is probably more
info there.

Phil Mason

Jul 1, 1986, 10:11:25 AM
Follow-ups to net.lang.forth. Subscribe now, if interested.

In article <3700005@uiucdcsp> joh...@uiucdcsp.CS.UIUC.EDU writes:
>
>The main reason that Forth is popular is its programming environment.
>Interactive programming environments that support an incremental
>development of an application are the best way for one or two people.
>If they have not built such an application before then they can quickly
>fix bad design decisions and are essentially using the language as a
>rapid prototyping tool.  If they have, then they can reuse most of the
>code from the earlier application.

Forth is great for reusable software tool writing. With Forth, it is possible
to give each software engineer much more of the entire application to work on
since the development time is much shorter. There is a very good principle
to follow when trying to get an efficient implementation: write two versions,
throw the first one away. With Forth it is quite easy to implement an
application one way, decide how it should be, and then do it over. You will
still beat the more traditional design philosophy hands-down. Most other
languages and operating systems don't give you the time and energy to make
multiple implementations of an application before your deadline.

>Forth has other advantages, as has been mentioned, principally its
>efficiency, ability to run on small machines, and ability to do low-level
>I/O. However, I think that its primary advantage is its programming
>environment. Of course, Lisp programming environments have provided
>these features for nearly two decades, but have only become generally
>available fairly recently. Scheme compilers have the potential to produce
>extremely efficient code, and Lisp-machine Lisp is used to write the
>entire operating system. As memory becomes cheaper, there will be no
>good reason not to use these other languages instead of Forth, and the
>many, many disadvantages of Forth will kill it.

I really enjoy LISP.  There's nothing wrong with it, but I'd never want to
write real-time software in it and I'd never want to port the language to
another computer. I've never seen a standard implementation of LISP that
deals with the low-level environment of the machine it runs on. With
Forth, you can reach the low-level, high-performance end of the machine
and still have a self-contained environment that is stand-alone. No
operating system needed, no monitor ROM - Forth is ROMmable and
self-contained. On the other hand, Forth can be made to run under any
operating system, if desired.

Think Big, Build Small, Port Everywhere - - -

Control the World with Forth.

far...@hoptoad.uucp

Jul 4, 1986, 10:21:22 PM
In article <1...@cci632.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
[article excerpted for later comment]
[1]
> ... pull out the old "recursive decompiler"
>and "descriptive text cross reference tool" and expand those deep nested
>routines into a pretty "structured listing", which includes the whole
>system.
>
>An even better trick is to draw a roadmap before, during or after coding,
>and include it in the functional spec.
[2]
>Unfortunately, many people are afraid of new tools that require changes in
>the way one thinks.
[3]
>The weakness of forth is not documentation or structure, but the lack
>of axiomatic organization. It requires a good associative data base,
>either human or computer. If this is understood and planned, it can
>be quite a simple matter to sell this type of system.

[3] - It is precisely the lack of axiomatic organization which makes Forth
nearly unusable in an environment where code must be generated quickly,
in concert with other programmers, and in a fashion which must be maintained
by programmers other than the original programmers. I have watched many
Forth programmers at work, and found *NONE* who went about the job of
programming in an organized fashion.  Much more likely was the "hack" method -
develop a routine which does a specific, fairly low-level, job, and then
cut, bash and squeeze to fit it into the larger structure.  It takes a very
organized person to be able to keep the entire organization of a large
software project in his/her head, from top to bottom, and although Forth does
not force this type of conceptualization, the majority of Forth programmers
I have worked with seem to work this way.

[2] - ALL new tools require changes in the way one thinks. It is simply a
matter of deciding whether a major change in one's thought is valuable,
or will create more difficulty than it will solve. My feeling is that there
isn't enough benefit to "thinking Forth" to justify relearning 15 years
worth of lessons. Show me that there is, and I may change my mind...

[1] - One of the problems I have with Forth is that it requires external
guides before complex programs become comprehensible to someone unfamiliar
with those programs. It shares with assembly language the drawback that the
low-level modules are extremely primitive, and, in turn, are often used in
extremely labyrinthine ways to accomplish a higher-level goal.  This does
nothing to help understanding. When you add in the very peculiar structure,
syntax, and notation, you have a software system that, in every complex
example I have seen, is cryptic to the point of illegibility.  I have taken
quite complex C programs and, in a matter of a week or two, with no help
from the original authors, ported them to entirely different systems with
minimal problems.  The only time I tried this trick with a Forth
program (and a small one at that, originally programmed by someone I was
assured was a very good Forth programmer) I worked for a month before finally
deciding to scrap the whole project. (BTW - this involved porting to a
machine with an order from management - "No Forth!". We finally decided to
do a functional equivalent to the program - management decided to hell with
the whole idea.)

----------------
Mike Farren
hoptoad!farren

r...@cci632.uucp

Jul 14, 1986, 11:23:29 AM
In article <8...@hoptoad.uucp> far...@hoptoad.UUCP (Mike Farren) writes:
>In article <1...@cci632.UUCP> r...@ccird1.UUCP (Rex Ballard) writes:
>[article excerpted for later comment]
>[1]
>> ... pull out the old "recursive decompiler"
>>and "descriptive text cross reference tool" and expand those deep nested
>>routines into a pretty "structured listing", which includes the whole
>>system.
>>
>>An even better trick is to draw a roadmap before, during or after coding,
>>and include it in the functional spec.
>[2]
>>Unfortunately, many people are afraid of new tools that require changes in
>>the way one thinks.
>[3]
>>The weakness of forth is not documentation or structure, but the lack
>>of axiomatic organization. It requires a good associative data base,
>>either human or computer. If this is understood and planned, it can
>>be quite a simple matter to sell this type of system.
>
>[3] - It is precisely the lack of axiomatic organization which makes Forth
>nearly unusable in an environment where code must be generated quickly,
>in concert with other programmers, and in a fashion which must be maintained
>by programmers other than the original programmers.

This is one I will gladly concede.  In fact, the most touted advantage of
Forth is that one programmer can build a complex system in a few weeks
or months (this is more a function of coding style and environment than
language syntax); there is no guarantee, however, that anyone else will
understand the quirks of the original programmer :-).

>[2] - ALL new tools require changes in the way one thinks. It is simply a
>matter of deciding whether a major change in one's thought is valuable,
>or will create more difficulty than it will solve. My feeling is that there
>isn't enough benefit to "thinking Forth" to justify relearning 15 years
>worth of lessons. Show me that there is, and I may change my mind...

The reason this came up in net.arch in the first place is because most
programmers spend 5-15 years thinking in terms of large, flat routines.
When these routines are broken into their smallest component parts,
they end up being useful "extensions" or "primitives".  Anyone can benefit
from habits learned by using Forth, regardless of the actual language
they use.  The other feature of "screen sized subroutines" is that
in RISC architectures with good fast caches, these primitives become
self-optimizing micro-code.
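A contrived sketch of that factoring habit carried over into C (all
names are my own, for illustration): instead of one flat routine, small
"primitive" functions that compose, with the main line reduced to a
composition of those primitives.

```c
/* Small, screen-sized "primitives" in the Forth-influenced style
   described above, composed into a word counter.  Illustrative only. */
#include <ctype.h>

static const char *skip_blanks(const char *s)
{
    while (*s && isspace((unsigned char)*s)) s++;
    return s;
}

static const char *skip_word(const char *s)
{
    while (*s && !isspace((unsigned char)*s)) s++;
    return s;
}

/* The "main line" is now just a composition of primitives. */
static int count_words(const char *s)
{
    int n = 0;
    for (s = skip_blanks(s); *s; s = skip_blanks(skip_word(s)))
        n++;
    return n;
}
```

Each primitive is small enough to read at a glance, and each is reusable
in the next routine that needs it.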

>[1] - One of the problems I have with Forth is that it requires external
>guides before complex programs become comprehensible to someone unfamiliar
>with those programs. It shares with assembly language the drawback that the
>low-level modules are extremely primitive, and, in turn, are often used in
>extremely labyrinthian ways to accomplish a higher-level goal.

There are two issues here. If the primitives are "axiomatic" (you intuitively
know what they do, and what results to expect), they can be more useful than
a "compiled language" in which all primitives are limited to a specific
syntax.  Unfortunately, Forth primitives are not as axiomatic as they could
be.  Here, Smalltalk has proven to be a definite improvement over Forth.

>This does
>nothing to help understanding. When you add in the very peculiar structure,
>syntax, and notation, you have a software system that, in every complex
>example I have seen, is cryptic to the point of illegibility.

In Forth, as in C, coding styles can vary from well structured to "totally
obfuscated". Look at some of the winners of the "obfuscated C contest" :-).
Many forth packages are deliberately obfuscated just before release.
The most common technique is the "white space squisher". Seeing forth with
no block structuring can be quite distressing. This is not the "normal"
form or style.

>I have been
>able to take quite complex C programs, and in a matter of a week or two, with
>no help from the original authors, been able to port them to entirely different
>systems with minimal problems. The only time I tried this trick with a Forth
>program (and a small one at that, originally programmed by someone I was
>assured was a very good Forth programmer) I worked for a month before finally
>deciding to scrap the whole project. (BTW - this involved porting to a
>machine with an order from management - "No Forth!". We finally decided to
>do a functional equivalent to the program - management decided to hell with
>the whole idea.)

Ironically, I had just the reverse experience. My first introduction to
C was Ron Cain's compiler, followed immediately by V6 Unix. My previous
experience was in Forth, and these 20-page definitions, pointers to functions
returning pointers to ints, and some of the other "finesse" tricks of C made
my first impressions of C less than great. Fortunately, I got over this
initial shock and C has been a very useful language ever since. However,
I still try to stick with "screen sized" functions, build libraries and a
main-line, rather than programs with a few subroutines.

I won't even try to convince anyone they should convert all of their
production code to Forth. There are too many advantages to the C/Unix
environment such as lint, indent, cb, cflow, ..... that make C easier
to document and analyse.  What I do recommend is examining your general
style of programming.  Do you use an in-line copy rather than a macro
or a subroutine?  If you could use a subroutine at little or no cost,
would you rather use a macro or a subroutine?  If you would choose the
subroutine, then it is actually possible to build an architecture
that makes subroutines containing loops run FASTER than an in-line expanded
loop.
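The macro-versus-subroutine choice above, sketched both ways in C (the
names are illustrative).  The behavior is the same; the macro expands in
line at every use site, while the subroutine is one shared body that a
good cache can keep hot.

```c
/* In-line expansion vs. a shared subroutine.  The macro costs a copy
   of the code per use site and has the classic double-evaluation
   hazard; the function is one body, called from everywhere. */

#define SQUARE_MACRO(x) ((x) * (x))   /* expands at each use site */

static int square_func(int x)          /* one shared copy of the code */
{
    return x * x;
}
```

Note the hazard: SQUARE_MACRO(i++) evaluates its argument twice, which
the subroutine form avoids.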

To be honest, if there were a GOOD C interpreter or incremental compiler/
editor that supported FULL C, available in source form for 4.2 BSD, I would
really like to know! It is the only part of Forth that I actually miss :-).

>Mike Farren
>hoptoad!farren

Rex Ballard.
