
Cases where Forth seems a little clunky


vand...@gmail.com

May 9, 2007, 10:44:56 AM
I was fiddling with a little bit of code, and wondered why things which
seemed like they should be simple, seemed rather tedious in Forth. The
result of this is my attempt to code up bcopy(), and see what it is
about Forth which makes some kinds of programming more demanding than
expected.

The definition I came up with is:

: bcopy ( src dest count -- ) -rot swap rot
0 ?do ( dest src ) 2dup c@ swap c! char+ swap char+ swap loop
2drop ;

By comparison, here's what I got in C:

void
bcopy(char *src, char *dest, int count)
{
	while (count--) *dest++ = *src++;
}

At the heart of it, it seems like as you move to two things being
actively used, and definitely by the time you reach three, names seem
to be a better way to access values than stack positions. Even in this
example, where I shuttled the count into the do..loop construct, I
still have to deal with multiple values: the pointers, the value moving
from one pointer into the other, and then the advance of the pointers.
I guess I could have factored out the inner body of the loop:

: c@c! ( src dest -- src+ dest+ )
2dup c@ swap c! char+ swap char+ swap ;
: bcopy ( src dest count -- ) -rot swap rot
0 ?do ( dest src ) c@c! loop 2drop ;

But it still seems like C's idioms and naming give one a clearer and
more compact way of "making it happen".

As a further experiment, I thought I'd try my hand at the same thing
in SEAforth. This is more like stack machine code than Forth, but
FWIW, here's what I came up with:

dup dup dup xor ( src dest count count 0 )
xor if $2
push a! dup b!
1. swap nop nop ( S: 1 src; b = src, a = dest; R: count )
$1:
@b !a+ over +
dup b! next $1
drop drop ret nop
$2:
drop drop drop ret

Note that being able to use A as a register (i.e., a named location)
contributes significantly to the streamlining of the code. I could've
fit the inner loop into a single instruction word and used their
"micro next", except that the B register doesn't have any
auto-increment facility like A does. Thus the 1 on the stack, and the
need to use the ALU and restore B on each iteration.

Finally, just for grins I looked at what GCC thought it could do with
the C code when compiling for x86. The stack/register dichotomy costs
it, as does the lack of auto-increment facilities. I suspect that the
SEAforth architecture would fall behind as the complexity of the
algorithm drove you to populate more registers with values, but at
least for bcopy() I'd say SEAforth has an edge on a traditional CISC
instruction set.

bcopy:
	pushl	%ebx
	movl	8(%esp), %ebx
	movl	12(%esp), %ecx
	movl	16(%esp), %edx
	jmp	.L10
.L12:
	movb	(%ebx), %al
	movb	%al, (%ecx)
	incl	%ebx
	incl	%ecx
.L10:
	decl	%edx
	cmpl	$-1, %edx
	jne	.L12
	popl	%ebx
	ret

Regards,
Andy Valencia

Jonah Thomas

May 9, 2007, 12:58:59 PM
On 9 May 2007 07:44:56 -0700
vand...@gmail.com wrote:

> I was fiddling with a little bit of code, and wondered why things
> which seemed like they should be simple, seemed rather tedious in
> Forth. The result of this is my attempt to code up bcopy(), and
> see what it is about Forth which makes some kinds of programming
> more demanding than expected.
>
> The definition I came up with is:
>
> : bcopy ( src dest count -- ) -rot swap rot
> 0 ?do ( dest src ) 2dup c@ swap c! char+ swap char+ swap loop
> 2drop ;

Here's a simple way:

: bcopy ( src dest count -- )
  CHARS MOVE ;

But maybe what you're doing is implementing MOVE ?

Here's one possibility:

: MOVE ( src dest count -- )
0 ?DO
OVER I + C@ OVER I + C!
LOOP
2DROP ;

This still isn't nearly as short as the C version. But if you write MOVE
in assembler you can probably make it about as good as it can get for
your processor.
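For comparison, that indexed MOVE has the same shape as this C sketch (hypothetical names, byte-at-a-time, not from the thread): the base pointers stay fixed and the loop index does the addressing, just as I does in the ?DO loop.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Indexed copy: dest[i] = src[i], mirroring  OVER I + C@ OVER I + C! */
static void bcopy_indexed(const char *src, char *dest, size_t count)
{
    for (size_t i = 0; i < count; i++)
        dest[i] = src[i];
}
```

A decent compiler turns this into much the same code as the pointer-walking version, so the choice is about readability, not speed.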

> By comparison, here's what I got in C:
>
> void
> bcopy(char *src, char *dest, int count)
> {
> while (count--) *dest++ = *src++;
> }

> As a further experiment, I thought I'd try my hand at the same thing
> in SEAforth. This is more like stack machine code than Forth, but
> FWIW, here's what I came up with:
>
> dup dup dup xor ( src dest count count 0 )
> xor if $2
> push a! dup b!
> 1. swap nop nop ( S: 1 src; b = src, a = dest; R: count )
> $1:
> @b !a+ over +
> dup b! next $1
> drop drop ret nop
> $2:
> drop drop drop ret
>
> Note that being able to use A as a register (i.e., a named location)
> contributes significantly to the streamlining of the code. I could've
> fit the inner loop into a single instruction word and used their
> "micro next", except that the B register doesn't have any
> auto-increment facility like A does. Thus the 1 on the stack, and the
> need to use the ALU and restore B on each iteration.

I read about Chuck Moore doing this sort of thing using a
single-instruction word to increment the value in R@ . I liked the
concept he was talking about, but I didn't check the details of his
processor.

Andrew Haley

May 9, 2007, 1:16:44 PM
vand...@gmail.com wrote:

> I was fiddling with a little bit of code, and wondered why things
> which seemed like they should be simple, seemed rather tedious in
> Forth. The result of this is my attempt to code up bcopy(), and see
> what it is about Forth which makes some kinds of programming more
> demanding than expected.

> The definition I came up with is:

> : bcopy ( src dest count -- ) -rot swap rot
> 0 ?do ( dest src ) 2dup c@ swap c! char+ swap char+ swap loop
> 2drop ;

Eww, that's horrid.

: bcopy ( s d n)
over + swap ?do dup c@ i c! loop drop ;

( Has an environmental dependency on chars being one address unit. So
sue me. :-)

> void
> bcopy(char *src, char *dest, int count)
> {
> while (count--) *dest++ = *src++;
> }

Andrew.

Andrew Haley

May 9, 2007, 2:09:24 PM
Andrew Haley <andr...@littlepinkcloud.invalid> wrote:
> vand...@gmail.com wrote:

>> I was fiddling with a little bit of code, and wondered why things
>> which seemed like they should be simple, seemed rather tedious in
>> Forth. The result of this is my attempt to code up bcopy(), and see
>> what it is about Forth which makes some kinds of programming more
>> demanding than expected.

>> The definition I came up with is:

>> : bcopy ( src dest count -- ) -rot swap rot
>> 0 ?do ( dest src ) 2dup c@ swap c! char+ swap char+ swap loop
>> 2drop ;

> Eww, that's horrid.

> : bcopy ( s d n)
> over + swap ?do dup c@ i c! loop drop ;

Oooh, no. A stupid typo. And I *promised* to test all the code I
posted. :-(

: bcopy ( s d n) over + swap ?do dup c@ i c! 1+ loop drop ;
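In C terms, a sketch of the same shape (the name is hypothetical): OVER + computes dest+count once as the ?DO limit, I walks the destination, and the source is bumped with 1+.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* End-address loop: stop when the destination reaches dest+count. */
static void bcopy_limit(const char *src, char *dest, size_t count)
{
    for (char *end = dest + count; dest < end; dest++, src++)
        *dest = *src;
}
```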

Andrew.

Mikael Nordman

May 9, 2007, 2:12:12 PM

> Eww, that's horrid.
>
> : bcopy ( s d n)
> over + swap ?do dup c@ i c! loop drop ;
>
You meant
: bcopy ( s d n -- )
over + swap ?do dup c@ i c! char+ loop drop ;

John Doty

May 9, 2007, 2:13:11 PM
vand...@gmail.com wrote:
> I was fiddling with a little bit of code, and wondered why things
> which seemed like they should be simple, seemed rather tedious in Forth.

It's not so much Forth, as RPN. In arithmetic, the data flow is usually
LIFO, natural for RPN, but the address flow in array processing is
usually circular.

Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
In LSE64:

# n s d copy yields nothing
# copy n cells from s to d

variables: s d
(copy) : s @@+ d @!+
copy : d ! s ! (copy) iterate

*No* stack gymnastics! Typical klunky LSE style: variables, very short
factors. Except I think most LSE users would explicitly decrement and
test the count, using "repeat" for the loop instead of the exotic
"iterate". I'm lazy.

It isn't exactly bcopy because LSE64 doesn't do bytes out of the box,
but a sensible implementor of a byte module would implement indirect
autoincrement operators for bytes.

I believe you could easily define @@+ (fetch indirect and increment the
address cell) and @!+ (similar store) in any other Forth dialect and
work in this style. Often in Forth if it's clumsy you haven't
implemented the right factors. Step back, think, and implement them.
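A sketch of that suggestion in C (names and types are mine, not LSE64's): model a variable as a cell holding an address, so @@+ fetches through it and advances it, and @!+ stores through it and advances it.

```c
#include <assert.h>
#include <stddef.h>

typedef long cell;

/* @@+ : fetch through the address in the variable, then advance it */
static cell fetch_ind_inc(cell **var) { return *(*var)++; }

/* @!+ : store through the address in the variable, then advance it */
static void store_ind_inc(cell v, cell **var) { *(*var)++ = v; }

/* copy n cells from s to d, in the style of the LSE64 example */
static void copy_cells(cell *s, cell *d, size_t n)
{
    while (n--) store_ind_inc(fetch_ind_inc(&s), &d);
}
```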

--
John Doty, Noqsi Aerospace, Ltd.
http://www.noqsi.com/
--
Specialization is for robots.

Marcel Hendrix

May 9, 2007, 3:29:58 PM
Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky
[..]

> Oooh, no. A stupid typo. And I *promised* to test all the code I
> posted. :-(

> : bcopy ( s d n) over + swap ?do dup c@ i c! 1+ loop drop ;

: bcopy ( s d n) bounds ?do count i c! loop drop ;

Using eForth64 (for Linux) this gives:

FORTH> : bcopy ( source destination size ) BOUNDS DO COUNT I C! LOOP DROP ; ok
FORTH> SEE bcopy
$005624B0: : bcopy ( -- )
$005624B0: BOUNDS
$005624B8: 2>R
$005624C0: BEGIN
$005624C8: COUNT
$005624D0: IC!
$005624D8: LOOP >$005624C8
$005624E8: DROP
$005624F0: ; ok
FORTH> COLON-FIX ( try auto-optimization )
'BOUNDS' is converted to a macro and inlined.
Current flags: $0000002020000006 ( n n -- n n ) => size = 22 bytes.
'2>R' is converted to a macro and inlined.
Current flags: $0000002000000043 COMPILE-ONLY ( n n -- ) => size = 19 bytes.
'COUNT' is converted to a macro and inlined.
Current flags: $0000001020000005 ( n -- n n ) => size = 18 bytes.
'IC!' is converted to a macro and inlined.
Current flags: $0000001000000003 ( n -- ) => size = 15 bytes.
'(loop)_nc' is converted to a macro and inlined.
Current flags: $00000009 ( -- ) => size = 28 bytes.
'DROP' is converted to a macro and inlined.
Current flags: $0000001000000004 ( n -- ) => size = 9 bytes. ok
FORTH> DIS bcopy
CODE bcopy MACRO ( -- ) \ size = 90
$005624F8 $POP2
$00562501 xchg rax, rbx
$00562503 add rbx, rax
$00562506 push rbx
$00562507 push rax
$00562508 mov rax, [rbp 0 +] qword
$0056250C lea rbp, [rbp 8 +] qword

$00562510 movzx rbx, [rax 0 +] byte
$00562514 inc rax
$00562517 xchg rax, rbx
$00562519 lea rbp, [rbp -8 +] qword
$0056251D mov [rbp 0 +] qword, rbx
$00562521 mov rbx, [rsp 0 +] qword
$00562525 mov [rbx 0 +] byte, al
$00562527 mov rax, [rbp 0 +] qword
$0056252B lea rbp, [rbp 8 +] qword

$0056252F add [rsp 0 +] qword, 1 b#
$00562534 mov rbx, [rsp 0 +] qword
$00562538 cmp [rsp 8 +] qword, rbx
$0056253D cs:cs:ja $00562510 offset NEAR
$00562545 lea rsp, [rsp #16 +] qword
$0056254A mov rax, [rbp 0 +] qword
$0056254E lea rbp, [rbp 8 +] qword
$00562552 $PUSH1
FORTH>

Not too bad, already, but it will get better in the final release.

-marcel

Andrew Haley

May 9, 2007, 3:49:56 PM
John Doty <j...@whispertel.losetheh.net> wrote:

> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
> In LSE64:

> # n s d copy yields nothing
> # copy n cells from s to d

> variables: s d
> (copy) : s @@+ d @!+
> copy : d ! s ! (copy) iterate

Lemme guess: you don't have zero-length strings and you don't have
threads.

Andrew.

Andreas Kochenburger

May 9, 2007, 3:56:18 PM
John Doty wrote:
> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
> In LSE64:
>
> # n s d copy yields nothing
> # copy n cells from s to d
>
> variables: s d
> (copy) : s @@+ d @!+
> copy : d ! s ! (copy) iterate
>
> *No* stack gymnastics! Typical klunky LSE style: variables, very short
> factors. Except I think most LSE users would explicitly decrement and
> test the count, using "repeat" for the loop instead of the exotic
> "iterate".

It's an interesting example; however, to me it seems to replace the
complexity of stack shuffling with the complexity of exotic operators. A
KISS solution has the charm of reflecting the natural motion when you
move a row of stones from one place to another. Think like a child:

: COPY { src dest count -- }
count 0 ?do
src @ dest !
1 +to src
1 +to dest
loop ;


----
NB: The following code works in my Forth, but is not standard-compliant:

\ Alternative COPY

: (++) \ ( pfa -- ) increment VALUE parameter
dup @ 1+ swap ! ;

: ++ \ ( 'local' -- ) compile increment of locals
' >body \ <- ticking locals is not allowed in ANS Forth
postpone literal postpone (++) ; IMMEDIATE

: MYCOPY { sourc dest num -- }
num 0 ?DO
sourc @ dest !
++ sourc
++ dest
LOOP ;

: TEST
c" Honey" c" Sugar"
cr ." Before: " over count type bl emit dup count type
over count drop over count mycopy
cr ." After: " swap count type bl emit count type ;

TEST

----
The following definition of ++ doesn't work either:

: ++ \ ( 'local' -- ) compile increment of locals
bl parse 2dup evaluate s" 1+ to" evaluate evaluate ; IMMEDIATE
\ does not work because input source changed

I have to stop now. But what would be an elegant and short
standard-compliant definition of ++ ?


Andreas

Frank Buss

May 9, 2007, 3:57:40 PM
John Doty wrote:

> I believe you could easily define @@+ (fetch indirect and increment the
> address cell) and @!+ (similar store) in any other Forth dialect and
> work in this style. Often in Forth if it's clumsy you haven't
> implemented the right factors. Step back, think, and implement them.

I think Marcel's solution looks very good and most like Forth,
especially BOUNDS and the clever usage of COUNT. But of course, you can
do it in Forth like in C :-)

: c@++ ( var -- value )
>r r@ @ dup c@ swap 1+ r> !
;

: c!++ ( value var -- )
>r r@ @ c! r> dup @ 1+ swap !
;

: -- ( var -- value )
dup @ dup >r 1- swap ! r>
;

variable src
variable dest
variable count
: bcopy
count ! dest ! src !
begin count -- while src c@++ dest c!++ repeat
;

--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

John Doty

May 9, 2007, 4:06:22 PM
Andrew Haley wrote:
> John Doty <j...@whispertel.losetheh.net> wrote:
>
>> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
>> In LSE64:
>
>> # n s d copy yields nothing
>> # copy n cells from s to d
>
>> variables: s d
>> (copy) : s @@+ d @!+
>> copy : d ! s ! (copy) iterate
>
> Lemme guess: you don't have zero-length strings

0 x iterate

does x zero times. Yes, it's magic ;-)

> and you don't have
> threads.

No threads. There are better ways to achieve concurrency.

Andrew Haley

May 9, 2007, 4:16:19 PM
John Doty <j...@whispertel.losetheh.net> wrote:
> Andrew Haley wrote:
>> John Doty <j...@whispertel.losetheh.net> wrote:
>>
>>> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
>>> In LSE64:
>>
>>> # n s d copy yields nothing
>>> # copy n cells from s to d
>>
>>> variables: s d
>>> (copy) : s @@+ d @!+
>>> copy : d ! s ! (copy) iterate
>>
>> Lemme guess: you don't have zero-length strings

> 0 x iterate

> does x zero times. Yes, it's magic ;-)

Ah, I see, this must be part of LSE64's famed linguistic simplicity.

>> and you don't have threads.

> No threads. There are better ways to achieve concurrency.

But not, apparently, the sort of concurrency that allows processes to
share code and data.

Andrew.

John Doty

May 9, 2007, 4:59:33 PM
Andrew Haley wrote:
> John Doty <j...@whispertel.losetheh.net> wrote:
>> Andrew Haley wrote:
>>> John Doty <j...@whispertel.losetheh.net> wrote:
>>>
>>>> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
>>>> In LSE64:
>>>> # n s d copy yields nothing
>>>> # copy n cells from s to d
>>>> variables: s d
>>>> (copy) : s @@+ d @!+
>>>> copy : d ! s ! (copy) iterate
>>> Lemme guess: you don't have zero-length strings
>
>> 0 x iterate
>
>> does x zero times. Yes, it's magic ;-)
>
> Ah, I see, this must be part of LSE64's famed linguistic simplicity.

1. *I* think it is *linguistically* simple. It's only complicated if you
insist on imagining a traditional Forth implementation. However,

2. Most users prefer "repeat" and explicit counter arithmetic, so maybe
it should go away.

>
>>> and you don't have threads.
>
>> No threads. There are better ways to achieve concurrency.
>
> But not, apparently, the sort of concurrency that allows processes to
> share code and data.

I'm a big believer in message passing. It *really* worked well (in C) on
the HETE-2 space mission. From a system engineer's point of view it's a
much better way to cleanly separate things. If you want to share big
blocks of memory efficiently, do it Mach style: send them by reference,
copy-on-write (assuming you have an MMU that can do this).

In HETE-2 we cheated just a little on the paradigm in the
control/communication processor where we had a large collection of
essentially independent status scalars that several processes needed to
inspect. There, we had a one writer, several reader shared block. No
mutexes or anything like that: just atomic reads/writes of the
variables. That saved passing a lot of tiny messages around. A process
that needed to know, say, the temperature of battery 1C, didn't really
care if the measurement happened 10 milliseconds or 10 seconds earlier.

But just piping output to another program is as tricky as LSE64
concurrency has gotten. It's for keeping simple jobs simple, not
supporting every possible complexity.

John Doty

May 9, 2007, 7:29:24 PM
Andreas Kochenburger wrote:

> It's an interesting example, however, to me it seems to replace the
> complexity of stack shuffling by the complexity of exotic operators. A
> KISS solution has the charm of reflecting the natural motion when you
> move a row of stones from one place to another. Think like a child:
>
> : COPY { src dest count -- }
> count 0 ?do
> src @ dest !
> 1 +to src
> 1 +to dest
> loop ;

All kinds of exotica here: ?do, +to, locals, block structure...

Coos Haak

May 9, 2007, 7:49:35 PM
On Wed, 09 May 2007 21:56:18 +0200, Andreas Kochenburger wrote:

> NB: The following code works in my Forth, but is not standard-compliant:
>
> \ Alternative COPY
>
>: (++) \ ( pfa -- ) increment VALUE parameter
> dup @ 1+ swap ! ;

Why not use +! ?
: (++) 1 swap +! ;

>: ++ \ ( 'local' -- ) compile increment of locals
> ' >body \ <- ticking locals is not allowed in ANS Forth
> postpone literal postpone (++) ; IMMEDIATE

Some Forths don't even allow getting the address of locals ;-)
But some have +TO

>: MYCOPY { sourc dest num -- }
> num 0 ?DO
> sourc @ dest !
> ++ sourc
> ++ dest
> LOOP ;
>
>: TEST
> c" Honey" c" Sugar"
> cr ." Before: " over count type bl emit dup count type
> over count drop over count mycopy
> cr ." After: " swap count type bl emit count type ;

Why not use S" strings? They are much easier to handle; the length is
already on the stack, so COUNT is superfluous.

> TEST
>
> ----
> The following definition on ++ doesn't work either:
>
>: ++ \ ( 'local' -- ) compile increment of locals
> bl parse 2dup evaluate s" 1+ to" evaluate evaluate ; IMMEDIATE
> \ does not work because input source changed
>
> I have to stop now. But what would be an elegant and short
> standard-compliant definition of ++ ?

There isn't one, standard that is.

--
Coos

CHForth, 16 bit DOS applications
http://home.hccnet.nl/j.j.haak/forth.html

sl...@jedit.org

May 9, 2007, 8:07:34 PM
On May 9, 10:44 am, vandy...@gmail.com wrote:
> By comparison, here's what I got in C:
>
> void
> bcopy(char *src, char *dest, int count)
> {
> while (count--) *dest++ = *src++;
> }

Except nobody implements bcopy() this way. First of all, according to
POSIX, bcopy() must handle the case where the source and destination
overlap. Your version does not. Second, copying data a byte at a time
is not a good idea on modern processors. At the very least, you want
to copy a word at a time, doing at most a few byte copies at the start
and end if 'src', 'dst' or 'count' are not multiples of the word size.
Even better, you use the vector unit of your CPU to copy larger
chunks, for example 128 bits at a time. For example, the Mac OS X
bcopy() (and similar libc routines) use AltiVec on PowerPC G4 and G5
CPUs to achieve a significant speedup over the scalar version. So
right there, a real bcopy() needs some rather involved logic to handle
overlaps, a fast code path for aligned word (or vector-) sized copies,
and very likely, a ton of inline assembly (conditionalized per CPU, of
course). That's at least 30 lines of code, even if you're only
supporting one processor type. Sorry, but bcopy() is not a one-liner.
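For the overlap requirement alone, here is a minimal memmove-style sketch (hypothetical, byte-at-a-time; a production version would add the word-sized and vectorized paths described above): copy backwards when the destination starts inside the source range.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Overlap-safe byte copy: forward normally, backward when the
   destination lands inside [src, src+count). */
static void bcopy_safe(const char *src, char *dest, size_t count)
{
    if (dest <= src || dest >= src + count) {
        while (count--) *dest++ = *src++;   /* no harmful overlap */
    } else {
        src += count; dest += count;        /* copy high-to-low */
        while (count--) *--dest = *--src;
    }
}
```

(Strictly, C only defines pointer comparisons within one object, which is fine here since the interesting case is overlapping buffers.)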

Slava

Coos Haak

May 9, 2007, 8:11:09 PM
On 9 May 2007 17:07:34 -0700, sl...@jedit.org wrote:

Seems you are describing ANS Forth MOVE ;-)

Andreas Kochenburger

May 10, 2007, 2:31:55 AM
"John Doty" <j...@whispertel.LoseTheH.net> wrote in message
news:M-ydnZjcSLPJxN_b...@wispertel.com...

> Andreas Kochenburger wrote:
>
>> It's an interesting example, however, to me it seems to replace the
>> complexity of stack shuffling by the complexity of exotic operators. A
>> KISS solution has the charm of reflecting the natural motion when you
>> move a row of stones from one place to another. Think like a child:
>>
>> : COPY { src dest count -- }
>> count 0 ?do
>> src @ dest !
>> 1 +to src
>> 1 +to dest
>> loop ;
>
> All kinds of exotica here: ?do, +to, locals, block structure...

Incidentally, yesterday evening I showed your first posting to my
younger son, who has an advanced IT course at school. It took some
while before he understood your notation, and then he just said
"that's not good, it's ugly and unnecessarily difficult".

Clunky.

Andreas


humptydumpty

May 10, 2007, 2:49:16 AM
hi Andy,

let's describe the C code in words: dereference the source pointer,
put the value at the destination pointer, increment both pointers.

so, the heart of the C code is to fetch a value and store it; let's
name it c@! . the last word in the definition of c@! must be a c! ,
so we obtain the stack effect:

: c@! ( d s -- ) c@ swap c! ;

then, the increment of both pointers (it is symmetric):

: ++ ( d s -- d+ s+ ) 1+ swap 1+ swap ;

it could be done using >R and R> .

then, put it into a loop:

: bcopy ( d s c -- ) 0 do 2dup c@! ++ loop 2drop ;

vand...@gmail.com wrote:
> I was fiddling with a little bit of code, and wondered why things
> which seemed like they should be simple, seemed rather tedious in Forth.

maybe that should be, forth as finer zoomer :)

> Regards,
> Andy Valencia

best regards,
humptydumpty

Andreas Kochenburger

May 10, 2007, 7:22:21 AM

"Coos Haak" <chf...@hccnet.nl> wrote in message
news:1swhynbwak6zm.fsf1447cds65$.dlg@40tude.net...

> On Wed, 09 May 2007 21:56:18 +0200, Andreas Kochenburger wrote:
> Why not use S" strings? They are much easier to handle; the length is
> already on the stack, so COUNT is superfluous.

Because I usually use "strings" :-)

>>: ++ \ ( 'local' -- ) compile increment of locals
>> bl parse 2dup evaluate s" 1+ to" evaluate evaluate ; IMMEDIATE
>> \ does not work because input source changed

>> But what would be an elegant and short standard-compliant definition of
++ ?
> There isn't one, standard that is.

Evaluating macros in strings would be compliant. However, it requires
some intermediate string construction to build
"<local> 1+ TO <local>" EVALUATE
But it looks clumsy.

Andreas


John Doty

May 10, 2007, 9:19:23 AM

LSE64 certainly doesn't address the idolatries of "advanced IT". The
target audience has different concerns. LSE64 is mostly not my
notation: based on Bob Goeke's LSE, it's a systems engineer's idea of
what a programming language could be.

pablo reda

May 10, 2007, 9:40:52 AM
I have different control structures, because of my particular purposes
(I don't have an interactive Forth).
In r4, bcopy is:

:bcopy | src dest count --
( 1? )( 1- rot c@+ rot c!+ rot ) 3drop ;

Pablo

Jean-François Michaud

May 10, 2007, 10:56:35 AM
On May 9, 11:49 pm, humptydumpty <ouat...@yahoo.com> wrote:
> hi Andy,
>
> let's describe the C code in words: dereference the source pointer,
> put the value at the destination pointer, increment both pointers.
>
> so, the heart of the C code is to fetch a value and store it; let's
> name it c@! . the last word in the definition of c@! must be a c! ,
> so we obtain the stack effect:
>
> : c@! ( d s -- ) c@ swap c! ;
>
> then, the increment of both pointers (it is symmetric):
>
> : ++ ( d s -- d+ s+ ) 1+ swap 1+ swap ;
>
> it could be done using >R and R> .
>
> then, put it into a loop:
>
> : bcopy ( d s c -- ) 0 do 2dup c@! ++ loop 2drop ;
>
> vandy...@gmail.com wrote:
> > I was fiddling with a little bit of code, and wondered why things
> > which seemed like they should be simple, seemed rather tedious in Forth.
>
> maybe that should be, forth as finer zoomer :)
>
> > Regards,
> > Andy Valencia
>
> best regards,
> humptydumpty

This is my favorite definition so far :). In the true spirit of Forth,
I would say. The factorizations are small, useful bits.

Regards
Jean-Francois Michaud

Andrew Haley

unread,
May 10, 2007, 11:14:04 AM5/10/07
to
John Doty <j...@whispertel.losetheh.net> wrote:
> Andrew Haley wrote:
>> John Doty <j...@whispertel.losetheh.net> wrote:
>>> Andrew Haley wrote:
>>>> John Doty <j...@whispertel.losetheh.net> wrote:
>>>>
>>>>> Bob Goeke's fix for this in LSE was indirect autoincrement addressing.
>>>>> In LSE64:
>>>>> # n s d copy yields nothing
>>>>> # copy n cells from s to d
>>>>> variables: s d
>>>>> (copy) : s @@+ d @!+
>>>>> copy : d ! s ! (copy) iterate

>>>> ... you don't have threads.


>>
>>> No threads. There are better ways to achieve concurrency.
>>
>> But not, apparently, the sort of concurrency that allows processes to
>> share code and data.

> I'm a big believer in message passing.

Sure, me too. But there's also a great deal to be said for re-entrant
code, even if you're doing all of your inter-process communication via
messages. I guess the code above could be re-entrant if every process
had its own copy of every variable.

> But just piping output to another program is as tricky as LSE64
> concurrency has gotten. It's for keeping simple jobs simple, not
> supporting every possible complexity.

Fair enough, that's perfectly reasonable.

Andrew.

Jean-François Michaud

May 10, 2007, 11:19:29 AM
On May 9, 11:31 pm, "Andreas Kochenburger" <a...@nospam.org> wrote:
> "John Doty" <j...@whispertel.LoseTheH.net> schrieb im Newsbeitragnews:M-ydnZjcSLPJxN_b...@wispertel.com...

That doesn't mean much, Andreas, especially if it's his first contact
with Forth-derived languages and RPN.

Also, it's not ugly, it's convoluted. It took me about 10 seconds to
understand (I had a headstart though: I'm familiar with Forth-like
languages and RPN processing).

I'm also somewhat fresh out of school (CS) and I have a different
perspective on the subject. The notation is pretty straightforward.
What your son had trouble with is probably the exotic nature of Forth
and RPN all at the same time. Being taught languages which aren't RPN
by nature, and not having to think about the stack, is the norm in
high-level languages. Languages are now designed so that coders don't
have to think about the hardware at all. It still makes me laugh to
think that a VERY good, advanced Java coder I know completely tripped
out when I hex-dumped an XML file to look at some problems we were
having.

In its context, I find John's definition rather elegant and simple;
although I don't personally like using variables, I can see them making
this exercise very simple.

I like HumptyDumpty's decomposition the most. His factors are simple
and useful. Everything is straightforward and clean to understand at a
glance; no effort required. Truthfully, it took me about 3 seconds for
that one (I'm lying a bit, I double checked the loop more thoroughly a
second time for an additional 2 seconds ;-)).

Regards
Jean-Francois Michaud


Andreas Kochenburger

May 10, 2007, 2:21:58 PM

Thanks for your opinions. But my son knows the principles of RPN and
Forth, and at school they even tried a bit of Prolog.

Personally I have learnt to have a healthy respect for straightforward
and easy-to-read coding. Optimisations to save a few bytes or
nanoseconds at the expense of code readability are nearly always
counterproductive. Ever spent many hours to hunt down a _silly_ code
bug? Ever stumbled over your own "cleverness"?

The topic of this thread is: Forth seems clunky. As a response some guys
here showed some of their toys: technically brilliant but IMO clunky in
their own nature. My boy's remark just reminded me of Andersen's tale
of the Emperor's New Clothes.

I don't mean to be disrespectful, btw, to the proponents (particularly
for eforth64).

Andreas

humptydumpty

May 10, 2007, 3:17:31 PM
hi all,
i know it is kind of silly to reply to my own post,
but here is some kind of funny ++ def.:

humptydumpty wrote:

> then, increment of both operators (is simmetric):
>
> : ++ ( d s -- d+ s+ ) 1+ swap 1+ swap ;
>

or maybe:

: ++ ( d s -- d+ s+ ) 1 1 D+ ;

best regards,
humptydumpty

Coos Haak

May 10, 2007, 3:21:38 PM
On 10 May 2007 12:17:31 -0700, humptydumpty wrote:

> or maybe:
>
>: ++ ( d s -- d+ s+ ) 1 1 D+ ;
>

Then, what would -10 -20 30 bcopy do?
-10 -20 30 move would have no problem with this.

humptydumpty

May 11, 2007, 3:09:18 AM
hi Coos,

Coos Haak wrote:

> Then, what would -10 -20 30 bcopy do?
> -10 -20 30 move would have no problem with this.

'move' has the stack description ( src dst cnt -- );
'bcopy' (my version) has ( dst src cnt -- ).
so,
-10 -20 30 move
should be
-20 -10 30 bcopy

and of course, the source range overlaps the destination range,
and 'bcopy' destroys the source, but the destination will be ok.

if we have the '-20 -10 30 move' problem, then 'bcopy' is helpless.
but it could be solved if we translate the problem to
src=-20+30, dest=-10+30, and 'bcopy' will have to step backwards
with '--' .

so, to be an equivalent of 'move', we need a 'bcopy+' that uses
'++', a 'bcopy-' that uses '--', and an alternative construct to
decide which to use, or maybe a refactored version of both
+ and - versions.

actually, the forth version just mimics the c version of 'bcopy';
that was the quest in this thread.

if i have to do a memory-block copy, i think i'll use 'move',
if i'm not sure about overlapping and if it is available. :)

excuse my ignorance: what kind of hardware uses
negative memory addresses?

> Coos
> CHForth, 16 bit DOS applications
> http://home.hccnet.nl/j.j.haak/forth.html

best regards,
humptydumpty
P.S. i followed the link and i see a computer-driven xylophone by a
unix? +kde laptop :P . but are the green icons for 'python'
modules?

Andrew Haley

May 11, 2007, 4:59:03 AM
Coos Haak <chf...@hccnet.nl> wrote:
> On 10 May 2007 12:17:31 -0700, humptydumpty wrote:

>> or maybe:
>>
>>: ++ ( d s -- d+ s+ ) 1 1 D+ ;
>>
> Then, what would -10 -20 30 bcopy do?

Nothing useful. But then, I'm hard-pressed to think of any system on
which a move straddling address 0 would be useful.

One rather curious omission in Standard Forth is that there is no null
address: in theory any address may be valid. In practice, people
still use 0 as a null, of course.

Andrew.

Marcel Hendrix

May 11, 2007, 1:22:13 PM
Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky
[..]

> Nothing useful. But then, I'm hard-pressed to think of any system on
> which a move straddling address 0 would be useful.

It would be useful on a (maxed-out) transputer, which, as you undoubtedly
remember, has signed addresses. I made tForth test for HERE being 0.

-marcel

Coos Haak

May 11, 2007, 2:17:45 PM
On 11 May 2007 00:09:18 -0700, humptydumpty wrote:

> hi Coos,


>
> P.S. i follow the link and i see a computer-driven xylophone by
> unix? +kde laptop :P . but the green icons are for 'python'
> modules?

We call them 'tingel-tangels' (there are two of them) and, because of
the metal, metallophones.
Originally they were driven by iForth (Marcel Hendrix).
Albert (red shirt) reworked the software to run on his ciforth.
The icons are for score files, containing music in ASCII form that the
Forth translates into commands for the tingel-tangels.
Simple hardware: one clock bit, one data bit and one strobe bit.
A cable connects to the parallel port. Using shift registers, with the
strobe bit whole chords are played.
See Albert's site for more.

--

Albert van der Horst

May 11, 2007, 2:07:00 PM
In article <1178867358.7...@u30g2000hsc.googlegroups.com>,

humptydumpty <oua...@yahoo.com> wrote:
>
>excuse my ignorance, what kind of hardware use
>negative memory adresses?

Transputers have a signed address range, such that 0
can come out as a normal address.

In general addresses are (more or less) unsigned; negative numbers wrap
around on most processors.
HEX -10 is more comfortable than 0F....F0.

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

Eberhard Roloff

May 11, 2007, 2:51:03 PM
to comp.la...@ada-france.org
Coos Haak wrote:
> On 11 May 2007 00:09:18 -0700, humptydumpty wrote:
>
>> hi Coos,
>>
>> P.S. i follow the link and i see a computer-driven xylophone by
>> unix? +kde laptop :P . but the green icons are for 'python'
>> modules?
>
> We call them 'tingel-tangels' (there are two of them) and because of the
> metal, metallophones.
> Originally they were driven by iForth (Marcel Hendrix).
> Albert (red shirt) reworked the software to run on his ciforth.
> The icons are for score-files, containing music in ASCII form the Forth
> translates in commands for the tingel-tangels.
> Simple hardware: one clock bit, one data bit and one strobe bit.
> Cable connected to the parallel port. Using shift registers, with the
> strobe bit whole accords are played.
> See for more Albert's site.
>
And Albert's site has which URL?

many thanks+kind regards
Eberhard

Coos Haak

May 11, 2007, 2:59:18 PM
On Fri, 11 May 2007 20:51:03 +0200, Eberhard Roloff wrote:

> Coos Haak wrote:
>> See for more Albert's site.
>>
> And Albert's site has which URL?
>
> many thanks+kind regards
> Eberhard

He just posted another reply ;-)
http://home.hccnet.nl/a.w.m.van.der.horst

Marcel Hendrix

May 11, 2007, 3:24:27 PM
Coos Haak <chf...@hccnet.nl> wrote Re: Cases where Forth seems a little clunky
[..]

> We call them 'tingel-tangels' (there are two of them) and because of the
> metal, metallophones.

Our German FIG colleagues made a (low-res) video of it so that you
now can both hear and see it:

http://www.forth-ev.de/staticpages/movies/tingeltangel2004.avi

-marcel

humptydumpty

May 11, 2007, 3:51:06 PM
thank you Marcel, the clip is very cool&funny ! :))

> Our German FIG colleagues made a (low-res) video of it so that you
> now can both hear and see it:
> http://www.forth-ev.de/staticpages/movies/tingeltangel2004.avi

> -marcel

best regards,
humptydumpty

Bruce McFarling

May 11, 2007, 4:10:46 PM
On May 11, 4:59 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Nothing useful. But then, I'm hard-pressed to think of any system on
> which a move straddling address 0 would be useful.

And of course while straddling zero is well defined in any given
implementation, it is not defined in the scope of the standard ... the
standard allows for sign-magnitude and ones-complement ... and of
course, the far more common situation, the standard is compatible with
a cell width of 16 bits or wider.

The unsigned number associated with -10 is only defined within the
same negative family and, more importantly 8-)# with a set cell width.

> One rather curious omission in Standard Forth is that there is no null
> address: in theory any address may be valid. In practice, people
> still use 0 as a null, of course.

It's not an omission, it's a consequence of the criteria for setting out
the standard. It's perfectly possible for 0 to be a valid data
address ... say, with separate code and data spaces in a 16-bit forth on
an 8086, or a 65816.

humptydumpty

May 11, 2007, 5:04:26 PM
hi Coos,

i enjoyed the 'tingel-tangel' video clip very much :)
and visited Albert's manx page. (and now i know
how he looks :) )

with the second definition of '++', on wrapping
through 0 'bcopy' will skip a byte and eventually
corrupt memory.

> CHForth, 16 bit DOS applications
> http://home.hccnet.nl/j.j.haak/forth.html

best regards,
humptydumpty

Albert van der Horst

May 11, 2007, 7:38:32 PM
>hi Coos,
<SNIP>

>P.S. i follow the link and i see a computer-driven xylophone by
>unix? +kde laptop :P . but the green icons are for 'python'
>modules?

The icon is a picture of the metallophone.
It was specifically designed for the score files that
are Forth code, but (programming by extending the language)
represent a musical score. Right click on the score, and you
can edit it. Left click on the score and it plays, using
turnkey manx, a Forth with a musical interpreter on top.
The effect is the same as:

lina -a
"deurklop.sco" INCLUDED
deurklop
BYE

The console window is closed normally, but you can open it, interrupt
the song by pressing a key, then fool around with the score, such as
playing it on the speaker of the computer, play twice as fast, play
both parts on the silver metallophone, play 3 halftones higher etc.

In the same vein icons work on windows 98. But of course wina is not a
"real windows Forth". ;-)

Groetjes Albert.

Andrew Haley

May 12, 2007, 5:56:01 AM
Marcel Hendrix <m...@iae.nl> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky
> [..]
>> Nothing useful. But then, I'm hard-pressed to think of any system on
>> which a move straddling address 0 would be useful.

> It would be useful on a (maxed-out) transputer, which, as you
> undoubtedly remember, has signed addresses.

Hmm. I don't think so, because a few words starting at address zero
are usually reserved for use as system pointers. Any move that
straddles address zero on a Transputer is almost certainly an error.

Andrew.

Andrew Haley

May 12, 2007, 6:02:48 AM
Bruce McFarling <agi...@netscape.net> wrote:
> On May 11, 4:59 am, Andrew Haley <andre...@littlepinkcloud.invalid>
> wrote:
>> Nothing useful. But then, I'm hard-pressed to think of any system on
>> which a move straddling address 0 would be useful.

> And of course while straddling zero is well defined in any given
> implementation, it is not defined in the scope of the standard
> ... the standard allows for sign-magnitude and ones-complement
> ... and of course, the far more common situation, the standard is
> compatible with an cell width 16 bits or wider.

Sure, but I'm not sure of the relevance of cell width in this context.

> The unsigned number associated with -10 is only defined within the
> same negative family and, more importantly 8-)# with a set cell
> width.

>> One rather curious omission in Standard Forth is that there is no null
>> address: in theory any address may be valid. In practice, people
>> still use 0 as a null, of course.

> Its not an omission, its a consequence of the criteria for setting
> out the standard. Its perfectly possible for 0 to be a valid data
> address ... say, with separate code and dataspaces in a 16bit forth
> on an 8086, or a 65816.

As with UFOs, the question is not whether or not it is possible, but
whether or not it has ever actually happened. In this case, I don't
think it's likely. Sure, there may be a system pointer that resides
at address zero, but that is very different from HERE, ALLOCATE, or '
returning zero. Simply declaring that none of these words can ever
return zero would have simplified the writing of standard code, and I
suspect that most people handling lists just assume it anyway.

Andrew.
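The idiom Andrew describes, list-handling code that simply assumes 0 terminates a chain, can be sketched in Python. This is a toy model; the flat-memory layout and the names are illustrative, not taken from any particular Forth.

```python
# Toy dictionary chain in a flat cell store: each node is [link, payload].
mem = {}

def link_node(addr, link, payload):
    mem[addr] = link          # cell 0: link to the next node
    mem[addr + 1] = payload   # cell 1: the payload

link_node(100, 0, 'c')        # last node: link = 0 plays the role of null
link_node(200, 100, 'b')
link_node(300, 200, 'a')

def walk(head):
    out = []
    while head:               # the common assumption: 0 terminates the list
        out.append(mem[head + 1])
        head = mem[head]
    return out

assert walk(300) == ['a', 'b', 'c']
```

If HERE, ALLOCATE, or ' could ever hand out address 0, the `while head:` test above would cut a list short, which is exactly why the guarantee would simplify standard code.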

Anton Ertl

May 12, 2007, 7:17:23 AM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
> Sure, there may be a system pointer that resides
>at address zero, but that is very different from HERE, ALLOCATE, or '
>returning zero. Simply declaring that none of these words can ever
>return zero would have simplified the writing of standard code, and I
>suspect that most people handling lists just assume it anyway.

Yes, it seems very common practice to me, both on the system and on
the programmer side, so that's a good candidate for standardisation.
Any chance that you would volunteer for making the RfD?

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2007: http://www.complang.tuwien.ac.at/anton/euroforth2007/

Marcel Hendrix

May 12, 2007, 7:33:34 AM
Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky

> Marcel Hendrix <m...@iae.nl> wrote:
>> Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky
>> [..]

>> It would be useful on a (maxed-out) transputer, which, as you
>> undoubtedly remember, has signed addresses.

> Hmm. I don't think so, because a few words starting at address zero
> are usually reserved for use as system pointers. Any move that
> straddles address zero on a Transputer is almost certainly an error.

That may be so for the particular implementation that you remember, but e.g.
the documentation for the T805 (The Transputer Databook, 2nd Edition, 1989,
pg 77) says:

"Internal memory starts at the most negative address #80000000 and extends to
#80000FFF. User memory begins at #80000070; this location is given the name
MemStart. An instruction ldmemstartval is provided to obtain the value of
MemStart.
[..]
The reserved area of internal memory below MemStart is used to implement
link and event channels.
[..]
External memory space starts at #80001000 and extends up through #00000000 to
#7FFFFFFF. Memory configuration data and ROM bootstrapping code must be in the
most positive address space, starting at #7FFFFFC6 and #7FFFFFE respectively."

There is no mention of anything remotely interesting around address 0.

-marcel


Albert van der Horst

May 12, 2007, 8:13:58 AM
In article <134b3ph...@news.supernews.com>,

You must be mistaken. A transputer process has special things
round address 0 but that is addressed from the workspace pointer.
Remember there are zillions of processes, each with their own
workspace pointer.
(The links and pointers to lists of processes, so any global
specialties, are near MININT $8000,0000 )

>
>Andrew.

Groetjes Albert

Andrew Haley

May 12, 2007, 2:26:28 PM
Marcel Hendrix <m...@iae.nl> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky

>> Marcel Hendrix <m...@iae.nl> wrote:
>>> Andrew Haley <andr...@littlepinkcloud.invalid> writes Re: Cases where Forth seems a little clunky
>>> [..]
>>> It would be useful on a (maxed-out) transputer, which, as you
>>> undoubtedly remember, has signed addresses.

>> Hmm. I don't think so, because a few words starting at address zero
>> are usually reserved for use as system pointers. Any move that
>> straddles address zero on a Transputer is almost certainly an error.

> That may be so for the particular implementation that you remember,
> but e.g. the documentation for the T805 (The Transputer Databook,
> 2nd Edition, 1989, pg 77) says:

> "Internal memory starts at the most negative address #80000000 and
> extends to #80000FFF. User memory begins at #80000070; this location
> is given the name MemStart. An instruction ldmemstartval is provided
> to obtain the value of MemStart.

Ah yes, quite right. I was thinking of the special addresses at
mint + small offsets, not address zero.

Andrew.

Andrew Haley

May 12, 2007, 2:28:07 PM
Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>> Sure, there may be a system pointer that resides
>>at address zero, but that is very different from HERE, ALLOCATE, or '
>>returning zero. Simply declaring that none of these words can ever
>>return zero would have simplified the writing of standard code, and I
>>suspect that most people handling lists just assume it anyway.

> Yes, it seems very common practice to me, both on the system and on
> the programmer side, so that's a good candidate for standardisation.
> Any chance that you would volunteer for making the RfD?

Is there any point? Even the most obviously sensible suggestions seem
to get shouted down.

Andrew.

Andrew Haley

May 12, 2007, 3:13:31 PM

And this has got me thinking: what did Transputer C compilers use for
NULL? I suppose they probably used mint. (i.e. the most negative
16-bit or 32-bit address.)

Andrew.

William James

May 12, 2007, 3:49:46 PM
On May 12, 7:13 am, Albert van der Horst <alb...@spenarnc.xs4all.nl>
wrote:

> Remember there are zillions of processes, each with their own
> workspace pointer.

Remember there are zillions of processes, each with its own
workspace pointer.

Anton Ertl

May 12, 2007, 4:48:31 PM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
>> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>>> Sure, there may be a system pointer that resides
>>>at address zero, but that is very different from HERE, ALLOCATE, or '
>>>returning zero. Simply declaring that none of these words can ever
>>>return zero would have simplified the writing of standard code, and I
>>>suspect that most people handling lists just assume it anyway.
>
>> Yes, it seems very common practice to me, both on the system and on
>> the programmer side, so that's a good candidate for standardisation.
>> Any chance that you would volunteer for making the RfD?
>
>Is there any point?

The point would be that this would likely end up in the new standard
document, and that programmers would not have to declare an
environmental dependency on this feature if they use it.

> Even the most obviously sensible suggestions seem
>to get shouted down.

That's an interesting observation from someone who predicted
successful RfDs for invisible pink unicorns.

Anyway, for any proposal there will be criticism, some of it more
sensible, some less. Then it's up to you as proponent to decide what
to do about each of them: modify the proposal, ignore it, or abandon
the proposal. If you still think that the proposal is sensible, you
won't take the latter option; and if the others agree with you, it
will get many "I have/will implement(ed)/use(d) this proposal"
results, and it will be voted for in the committee and get into the
document.

Bruce McFarling

May 12, 2007, 6:15:13 PM
On May 12, 2:28 pm, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Is there any point? Even the most obviously sensible suggestions seem
> to get shouted down.

I wasn't shouting against it, I was pointing out why ( address ) was
left as an arbitrary unsigned.

I would particularly like ' never returning 0 ... it would simplify
putting a branch rather than recursive MYSELF in an xt-based decision
table to be able to have a value known not to be an xt.

Bruce McFarling

May 12, 2007, 6:18:48 PM
On May 12, 6:02 am, Andrew Haley <andre...@littlepinkcloud.invalid>
wrote:

> Sure, but I'm not sure of the relevance of cell width in this context.

If I use HEX -10 as a shorthand for $FFF0, then when I switch
computers, it's going to be $FFFFFFF0.

If I use HEX FFF0 to mean $FFF0, then when I switch computers, it's
still going to be $FFF0.
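Bruce's point can be checked with a quick Python sketch, using bit masks to stand in for 16-bit and 32-bit cells; the masks are the only assumption here.

```python
CELL16 = 0xFFFF       # mask modelling a 16-bit cell
CELL32 = 0xFFFFFFFF   # mask modelling a 32-bit cell

# HEX -10 is interpreted relative to the cell width:
assert -0x10 & CELL16 == 0xFFF0
assert -0x10 & CELL32 == 0xFFFFFFF0

# whereas a literal $FFF0 stays $FFF0 on both systems:
assert 0xFFF0 & CELL16 == 0xFFF0
assert 0xFFF0 & CELL32 == 0xFFF0
```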

Jean-François Michaud

May 13, 2007, 1:34:41 AM

Hold on a second. We're talking about source and destination
addresses. Addresses can't be negative, by definition. Thinking about
feeding negative numbers to the function is a non problem I would say.

Regards
Jean-Francois Michaud

Coos Haak

May 13, 2007, 7:52:59 AM
On 12 May 2007 22:34:41 -0700, Jean-François Michaud wrote:

> On May 10, 12:21 pm, Coos Haak <chfo...@hccnet.nl> wrote:
>> On 10 May 2007 12:17:31 -0700, humptydumpty wrote:
>>
>>> or maybe:
>>
>>>: ++ ( d s -- d+ s+ ) 1 1 D+ ;
>>
>> Then, what would -10 -20 30 bcopy do?
>> -10 -20 30 move would have no problem with this.
>>
>

> Hold on a second. We're talking about source and destination
> addresses. Addresses can't be negative, by definition. Thinking about
> feeding negative numbers to the function is a non problem I would say.
>

What I wanted to say is that when d wraps from UMAX to zero, the carry will
add to s; D+ is less elegant to me for this sort of operation. See the
other threads for the existence of negative addresses.

Stephen Pelc

May 13, 2007, 9:12:36 AM

All proposals get shouted at, either for a word name conflict
(but not in this case), or for some theoretical objection. In this
case it's that transputers have signed addresses. The only place
I've seen a transputer derivative (ST10) in the last few years
is in a set-top box. Are there any other extant CPUs with signed
addresses whose zero is actually useful?

I think that you can safely make the proposal.

Stephen


--
Stephen Pelc, steph...@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads

Marcel Hendrix

May 13, 2007, 10:26:29 AM
steph...@mpeforth.com (Stephen Pelc) writes Re: Cases where Forth seems a little clunky

> On Sat, 12 May 2007 18:28:07 -0000, Andrew Haley
> <andr...@littlepinkcloud.invalid> wrote:

[..]


>>Is there any point? Even the most obviously sensible suggestions seem
>>to get shouted down.

> All proposals get get shouted at, either for word name conflict
> (but not in this case), or for some theoretical objection. In this
> case it's that transputers have signed addresses. The only place
> I've seen a transputer derivative (ST10) in the last few years
> is in a set-top box. Are there any other extant CPUs with signed
> addresses whose zero is actually useful?

I didn't notice I was shouting.

Of course, if you research it enough, just about any CPU variant can be found,
no matter how brain-dead it might be. (E.g., the transputer actually *halted*
on arithmetic overflow ;-) This does not mean that a single counter example
from some pathological piece of hardware suffices to shoot down a useful proposal.

> I think that you can safely make the proposal.

That would be fine with me, I'd vote in favor.

BTW, the transputer had signed addresses so that its ALU could do double-duty
for address computation, and other quirks resulted from the desire not to have
the cost of a flag register (more state to save during task switching).

-marcel

Jonah Thomas

May 13, 2007, 12:33:42 PM
Jean-François Michaud <com...@comcast.net> wrote:

> Coos Haak <chfo...@hccnet.nl> wrote:
> > schreef humptydumpty:
> >
> > > or maybe:
> >
> > >: ++ ( d s -- d+ s+ ) 1 1 D+ ;
> >
> > Then, what would -10 -20 30 bcopy do?
> > -10 -20 30 move would have no problem with this.
>
> Hold on a second. We're talking about source and destination
> addresses. Addresses can't be negative, by definition. Thinking about
> feeding negative numbers to the function is a non problem I would say.

What we want is for the two different single numbers to increment
independently.

What we get with D+ is the possibility that a carry will result in one
of them getting the wrong result.

In this particular case, I think we should get a carry only when d is
the largest possible unsigned number.

If you suppose that the destination address will never wrap around from
the highest possible address to zero, then it ought to be OK. Shouldn't
it?

Let's see, when you add two singles you get the same bit pattern whether
you treat them as signed or unsigned. So for singles it doesn't matter.
And when it's doubles, each double number is just like a 2n-bit single,
with one sign bit at the top that could be treated as the highest
unsigned bit. So it ought to work just like singles, right? It won't
matter if the high cells of the doubles are signed or unsigned, you'll
get the same bit pattern when you add them. The only issue is that
carry, and since you only add one you only get the carry in that one
particular case.

So I think it ought to work in every case you could possibly care about. But
it's tricky code, hard to read. I had to think about whether it would
work. I'd use it sometimes, if it happened to be faster or if it was
shorter and those were important. ( swap 1+ swap 1+ is 4 words, 1 1 d+
is 3 words provided 1 is a constant. But I was recently informed that
one optimising compiler can optimise away a couple of swaps and can't
optimise away D+ .)
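Jonah's analysis can be tested with a small Python model of D+ applied to the pair ( d s ) treated as a double-cell number, with d as the low cell and s as the high cell. The cell width and the addresses used are illustrative.

```python
def d_plus(x_lo, x_hi, y_lo, y_hi, bits=32):
    """Model of Forth D+ on two double-cell numbers ( lo hi )."""
    mask = (1 << bits) - 1
    lo = x_lo + y_lo
    hi = (x_hi + y_hi + (lo >> bits)) & mask   # carry from the low cell
    return lo & mask, hi

# ++ is  1 1 D+ , applied to ( d s ): d is the low cell, s the high cell.
d, s = d_plus(0x1000, 0x2000, 1, 1)
assert (d, s) == (0x1001, 0x2001)   # both addresses incremented, as hoped

# but when d is the largest unsigned number, the carry spills into s:
d, s = d_plus(0xFFFFFFFF, 0x2000, 1, 1)
assert (d, s) == (0, 0x2002)        # s advanced by 2: the wrap-around bug
```

This confirms the claim above: the only problem case is d equal to the largest possible unsigned number.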

Marcel Hendrix

May 13, 2007, 12:48:11 PM
Jonah Thomas <j2th...@cavtel.net> wrote Re: Cases where Forth seems a little clunky
[..]

> I had to think about whether it would
> work. I'd use it sometimes, if it happened to be faster or if it was
> shorter and those were important. ( swap 1+ swap 1+ is 4 words, 1 1 d+
> is 3 words provided 1 is a constant. But I was recently informed that
> one optimising compiler can optimise away a couple of swaps and can't
> optimise away D+ .)

iForth version 2.1.2545, generated 18:23:57, January 28, 2007.
i6 binary, native floating-point, double precision.
Copyright 1996 - 2007 Marcel Hendrix.

FORTH> : ++ 1 1 D+ ; ok
FORTH> SEE ++
Flags: TOKENIZE, ANSI
: ++ 1 1 D+ ; ok
FORTH> : test -10 -20 ++ ; ok
FORTH> SEE test
Flags: TOKENIZE, ANSI
: test -10 -20 ++ ; ok
FORTH> ' test idis
$0052F380 : [trashed]
$0052F388 push #247 b#
$0052F38A push #237 b#
$0052F38C ;
FORTH> : tt test + ; ' tt idis
$0052F400 : [trashed]
$0052F408 push #228 b#
$0052F40A ;
FORTH> : uu test + drop ; ' uu idis
$0052F480 : [trashed]
$0052F488 ;

-marcel

Albert van der Horst

May 13, 2007, 11:38:14 AM
In article <2629391...@frunobulax.edu>, Marcel Hendrix <m...@iae.nl> wrote:
>steph...@mpeforth.com (Stephen Pelc) writes Re: Cases where Forth seems a little clunky
>
<SNIP>

>Of course, if you research it enough, just about any CPU variant can be found,
>no matter how brain-dead it might be. (E.g., the transputer actually *halted*
>on arithmetic overflow ;-) This does not mean that a single counter example
>from some pathological piece of hardware suffices to shoot down a useful proposal.

Halting on error on a transputer must be switched on.
Like all features of the transputer this was very well thought-out.

In the 80's I wrote an interpreter for general purpose modelling that
did just that. It stopped on arithmetic errors. Most simulation
systems at the time crashed. Our simulation system didn't, no matter
what model it was calculating. The calculation process ran a sort of
Forth interpreter under control of a c-program. It got some admiration
from a visiting professor, and he will still remember the casual
sentence " of course MANIP doesn't crash."

Given the purpose of transputers (they were used to do e.g.
months of quantum chromodynamics calculations), it is perfectly
sensible to have the state of an offending calculator exactly
preserved. Remember, you can get the whole state of halted processor
by using the links in a special way.

So halting on arithmetic overflow can be sensible, maybe not in the
context of control systems without a watchdog. (Are there any?).

<SNIP>

>
>-marcel

foxchip

May 13, 2007, 6:52:00 PM
On May 12, 10:34 pm, Jean-François Michaud <come...@comcast.net>
wrote:

> Hold on a second. We're talking about source and destination
> addresses. Addresses can't be negative, by definition. Thinking about
> feeding negative numbers to the function is a non problem I would say.
>
> Regards
> Jean-Francois Michaud

MuP21, F21, and i21 all have a 20-bit external data bus with a 20-bit
address range for DRAM. The 21-bit alu and 21-bit stack values can be
considered as 20-bit numbers with carry; and that fits the 20-bit wide
external data bus. As 20-bit unsigned numbers the DRAM address
range is 0 to FFFFF and as signed numbers the range of DRAM
decode is 0 to -1. 20-bit negative numbers as addresses
select one half of the main DRAM address space.

Bit-20 in the ALU, stack, and control registers when used in
addressing controls SRAM/ROM/Register select. If you think
of it as 21-bit address space then negative numbers select
SRAM/ROM/Register spaces. On F21 and i21 the interrupt vector
is 0 in DRAM (0) or 0 in SRAM if the homepage was set to SRAM.
0 is not just a valid address but an important one.

I have seen hardware and software implementations of bit-threading
where the msb of the address space selects between threaded
code address lists and addresses of CODE subroutines. In both
cases 0 is a valid address and negative addresses are valid. I
think this applied to Novix.

On c18 there are internal and external address spaces. Internal
SRAM spaces start at 0 and high addressing bits are currently
undefined. External addresses start at zero and the msb is
undefined on one prototype but used on another so that
negative addresses might be valid memory addresses.

Best Wishes
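foxchip's 20-bit ranges can be sketched numerically. This is a toy model of the address arithmetic only, not of the actual chips.

```python
MASK20 = (1 << 20) - 1    # the 20-bit DRAM address space
BIT20  = 1 << 20          # the select bit above it (bit-20)

def signed20(x):
    """Interpret a 20-bit pattern as a signed number."""
    return x - (1 << 20) if x & (1 << 19) else x

# Read as unsigned, DRAM addresses run 0..FFFFF; read as signed, the
# same patterns run 0 upward and then -80000..-1, so the decode range
# "0 to -1" spans the whole space:
assert signed20(0x00000) == 0
assert signed20(0xFFFFF) == -1
assert signed20(0x80000) == -0x80000   # negative half of DRAM

# With bit-20 set, the same low bits land in SRAM/ROM/register space:
addr = BIT20 | 0x00000
assert addr & BIT20       # selects internal space, not DRAM
assert addr & MASK20 == 0 # and address 0 within it is meaningful
```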

Gerry

May 14, 2007, 3:44:49 AM

I remember a 16 bit Forth on a member of the 68000 family where the
Forth address was a signed offset from one of the address registers,
hence 0 was a valid address.

Rather than specify that 0 is the null address in Forth 200X, wouldn't
it be better to have an environment query that returned the null
address for a system?

Gerry

Anton Ertl

May 14, 2007, 4:31:11 AM
Gerry <ge...@jackson9000.fsnet.co.uk> writes:
>Rather than specify that 0 is the null address in Forth 200X wouldn't
>it be better to have an environment query that returned the null
>address for a system.

No. But unless somebody makes an RfD on this topic, I don't see any
point in discussing this again.

Andrew Haley

May 14, 2007, 5:03:03 AM
Gerry <ge...@jackson9000.fsnet.co.uk> wrote:

> I remember a 16 bit Forth on a member of the 68000 family where the
> Forth address was a signed offset from one of the address registers,
> hence 0 was a valid address.

The question isn't whether address 0 is valid, but whether any
standard word can ever return it.

> Rather than specify that 0 is the null address in Forth 200X wouldn't
> it be better to have an environment query that returned the null
> address for a system.

There is never any need for that. If you want a unique sentinel for a
list you can always say

create nil

and that's it.

Andrew.

Anton Ertl

May 14, 2007, 5:18:40 AM
Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>There is never any need for that. If you want a unique sentinel for a
>list you can always say
>
>create nil
>
>and that's it.

The next created word or variable could have the same address
(consider separate-header systems). Better make that

variable nil
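Anton's objection can be illustrated with a toy model of a separate-header system's data space. The allocator below is hypothetical, but it shows why CREATE with no allotment need not yield a unique address while VARIABLE does.

```python
class DataSpace:
    """Toy data space; headers are assumed to live elsewhere
    (a separate-header system)."""
    def __init__(self):
        self.here = 0x1000   # hypothetical start of data space

    def create(self):        # CREATE reserves no data space
        return self.here

    def variable(self):      # VARIABLE reserves one cell
        addr = self.here
        self.here += 4       # assume 4-byte cells
        return addr

ds = DataSpace()
nil = ds.create()        # CREATE nil
head = ds.variable()     # the next definition...
assert nil == head       # ...gets the same address: nil is not unique

ds2 = DataSpace()
nil2 = ds2.variable()    # VARIABLE nil reserves a cell
head2 = ds2.variable()
assert nil2 != head2     # now the sentinel really is unique
```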

Andrew Haley

May 14, 2007, 5:30:21 AM
Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
> Andrew Haley <andr...@littlepinkcloud.invalid> writes:
>>There is never any need for that. If you want a unique sentinel for a
>>list you can always say
>>
>>create nil
>>
>>and that's it.

> The next created word or variable could have the same address
> (consider separate-header systems).

Ouch, you're right! That is a very nasty possible side-effect of
separated headers.

Thanks,
Andrew.

Coos Haak

May 14, 2007, 2:50:16 PM
On Mon, 14 May 2007 09:30:21 -0000, Andrew Haley wrote:

It's not so much separate headers, but separation of code and data ;-)

Jerry Avins

May 14, 2007, 11:18:48 PM

Doesn't that depend on whether a particular bunch of bits is construed
as signed or unsigned? Is binary 11111111111111111111111111111111 -1 or
4294967295? How does the address decoder know?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
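Jerry's point, that the address decoder sees only the bit pattern and the signedness is in the reader's eye, can be shown directly; Python's struct module reinterprets the same four bytes both ways.

```python
import struct

bits = 0xFFFFFFFF                       # thirty-two 1-bits
raw = struct.pack('<I', bits)           # the pattern as four bytes

unsigned, = struct.unpack('<I', raw)    # construed as unsigned
signed,   = struct.unpack('<i', raw)    # construed as signed

assert unsigned == 4294967295
assert signed == -1                     # same bits, different reading
```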

Jean-François Michaud

May 15, 2007, 10:30:07 PM
On May 14, 8:18 pm, Jerry Avins <j...@ieee.org> wrote:

> ? wrote:
> > On May 10, 12:21 pm, Coos Haak <chfo...@hccnet.nl> wrote:
> >> On 10 May 2007 12:17:31 -0700, humptydumpty wrote:
>
> >>> or maybe:
> >>> : ++ ( d s -- d+ s+ ) 1 1 D+ ;
> >> Then, what would -10 -20 30 bcopy do?
> >> -10 -20 30 move would have no problem with this.
>
> >> --
> >> Coos
>
> >> CHForth, 16 bit DOS applicationshttp://home.hccnet.nl/j.j.haak/forth.html
>
> > Hold on a second. We're talking about source and destination
> > addresses. Addresses can't be negative, by definition. Thinking about
> > feeding negative numbers to the function is a non problem I would say.
>
> Doesn't that depend on whether a particular bunch of bits is construed
> as signed or unsigned? Is binary 11111111111111111111111111111111 -1 or
> 4294967295? How does the address decoder know?

We make that distinction, the computer doesn't have to. An address is
an address (no matter how we, as human beings, decide to interpret it,
it is only a sequence of bits to the computer). If you like to think
of it as signed numbers then isn't this simply a useless higher
abstraction veil that gets in the way of thoughts?

Also, one might be inclined to think, if such a framework of mind is
used that "-1" is a smaller address than say 1 when it is more
relevant, not to mention usually architecturally correct, to think of
"-1" as a much larger address than 1; the largest on a 32 bit
architecture. Why the mind twists? Do they bring anything more to the
table than what the straightforward view allows?

Regards
Jean-Francois Michaud

humptydumpty

May 16, 2007, 2:24:03 AM
hi

Jean-François Michaud wrote:
> We make that distinction, the computer doesn't have to. An address is
> an address (no matter how we, as human beings, decide to interpret it,
> it is only a sequence of bits to the computer). If you like to think
> of it as signed numbers then isn't this simply a useless higher
> abstraction veil that gets in the way of thoughts?

> Also, one might be inclined to think, if such a framework of mind is
> used that "-1" is a smaller address than say 1 when it is more
> relevant, not to mention usually architecturally correct, to think of
> "-1" as a much larger address than 1; the largest on a 32 bit
> architecture. Why the mind twists? Do they bring anything more to the
> table than what the straightforward view allows?

maybe the source of confusion is that addresses are
expressed as a value (signed or unsigned) the size of the
cpu's integer. should an address be identified with
a simple integer (signed or unsigned)? maybe the
concept of address = integer + some conventions.
what should these conventions be?

i don't know about other cpus, but on x86,
negative addresses [-1, ...] could hint at a stack
on top of the segment, a stack that grows downward,
while positive addresses remind me of a memory heap.

unexpected to me was that there exist processors that address
memory from negative to positive addresses.

> Regards
> Jean-Francois Michaud

best regards,
humptydumpty

Jean-François Michaud

May 16, 2007, 1:28:29 PM
On May 16, 2:24 am, humptydumpty <ouat...@yahoo.com> wrote:
> hi
>
> Jean-François Michaud wrote:
> > We make that distinction, the computer doesn't have to. An address is
> > an address (no matter how we, as human beings, decide to interpret it,
> > it is only a sequence of bits to the computer). If you like to think
> > of it as signed numbers then isn't this simply a useless higher
> > abstraction veil that gets in the way of thoughts?
> > Also, one might be inclined to think, if such a framework of mind is
> > used that "-1" is a smaller address than say 1 when it is more
> > relevant, not to mention usually architecturally correct, to think of
> > "-1" as a much larger address than 1; the largest on a 32 bit
> > architecture. Why the mind twists? Do they bring anything more to the
> > table than what the straightforward view allows?
>
> maybe the source of confusion is that addresses are
> expressed as a value (signed or unsigned) the size of
> the cpu's integer. should an address be identified with
> a simple integer (signed or unsigned)? maybe the
> concept is: address = integer + some conventions.
> what should these conventions be?

The source of confusion as you say, in my mind, is adding a useless
layer of complexity (thinking about addresses as positive or negative
addresses) to something that can be simple to think about. When I
think of an address in memory, I think of a contiguous chunk that
starts at 0 and ends wherever the computer allows me to go (32 bits'
worth in my case, since I'm working on a PC architecture). This
absolute address must be unique. I see no advantage to having negative
absolute addresses, but maybe somebody can explain to us the rationale
behind the idea. I'm not talking about relative addresses; that's a
different story: one can imagine that the absolute address is laid
down somewhere in a register, the relative positioning being either
positive or negative.

[SNIP]

> unexpected to me was that there exist processors that access
> memory from negative to positive addresses.

Are you talking about memory offsets or absolute addresses?

Regards
Jean-Francois Michaud

Jerry Avins

unread,
May 16, 2007, 1:56:43 PM5/16/07
to

I'm not suggesting that you twist your mind, rather that negative
addresses can have a simple interpretation if one wants to use it. Back
in the dark ages when the monitor ROM (and perhaps CP/M jump table) was
located at the top of memory, with sometimes as little as 24K of RAM,
thinking of the monitor addresses as negative provided a semblance of
contiguity. With that convention, the useful addresses ran from -800 to
+5FFF. If one likes it, why not?

Roger Ivie

unread,
May 16, 2007, 2:25:39 PM5/16/07
to
On 2007-05-16, Jerry Avins <j...@ieee.org> wrote:
> I'm not suggesting that you twist your mind, rather that negative
> addresses can have a simple interpretation if one wants to use it. Back
> in the dark ages when the monitor ROM (and perhaps CP/M jump table) was
> located at the top of memory, with sometimes as little as 24K of RAM,
> thinking of the monitor addresses as negative provided a semblance of
> contiguity. With that convention, the useful addresses ran from -800 to
> +5FFF. If one likes it, why not?

A more recent example involves the transition from VAX to Alpha.
VAX/VMS, being an operating system for 32-bit machines, uses 32-bit
addresses.

In Alpha/VMS, a 32-bit address is signed. It is extended to 64 bits by
sign extension. This places the traditional user program address space
at the bottom of the map, where it always was, and the traditional
system address space (0x80000000 and up) at the top of the address map,
where it always was, opening the space between them for extending the
address space.

In other words, Alpha/VMS treats 32-bit addresses as signed and 64-bit
addresses as unsigned. A 32-bit program can continue using 32-bit
addresses as it always has. A 64-bit program can make use of the new
address space between the old spaces.
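For concreteness, the sign-extension rule can be sketched in a few lines (a minimal Python illustration, with the 32/64-bit masking done by hand since Python integers are unbounded):

```python
def sign_extend_32_to_64(addr32):
    """Sign-extend a 32-bit address to 64 bits, as Alpha/VMS does."""
    if addr32 & 0x80000000:                 # old system space: high bit set
        return addr32 | 0xFFFFFFFF00000000  # lands at the top of the 64-bit map
    return addr32                           # old user space stays at the bottom

# Traditional system space (0x80000000 and up) keeps its place at the top:
assert sign_extend_32_to_64(0x80000000) == 0xFFFFFFFF80000000
# Traditional user space keeps its place at the bottom:
assert sign_extend_32_to_64(0x00010000) == 0x00010000
```

Everything between the two images is newly usable 64-bit address space.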
--
roger ivie
ri...@ridgenet.net

humptydumpty

unread,
May 16, 2007, 4:38:12 PM5/16/07
to
hi Francois,

Jean-François Michaud wrote:


> humptydumpty wrote:
> > unexpected to me was that there exist processors that access
> > memory from negative to positive addresses.
> Are you talking about memory offsets or absolute addresses?

i'm talking about absolute addresses. in the transputer case,
: ++ 1 1 d+ ; is just a bug. but in the case of positive absolute
addresses it could be used; it makes no sense to increment a pointer
past the upper limit of a memory (segment).

i was mistaken when i thought that on any platform the lower limit of
memory (segment) = 0 = the lower limit of an unsigned integer (in other
words, address = unsigned_int is not true everywhere).
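for illustration, the transputer convention can be sketched in python (my sketch, assuming 32-bit cells, not transputer code):

```python
def to_signed32(x):
    """Reinterpret a 32-bit pattern as a signed (two's-complement) value."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

MOST_NEG = 0x80000000                        # transputer memory starts here
assert to_signed32(MOST_NEG) == -0x80000000  # the signed view: most negative
# under signed comparison, the bottom of memory sits below address 0:
assert to_signed32(MOST_NEG) < 0
```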

John Passaniti

unread,
May 16, 2007, 5:51:05 PM5/16/07
to
Jerry Avins wrote:
> I'm not suggesting that you twist your mind, rather that negative
> addresses can have a simple interpretation if one wants to use it. Back
> in the dark ages when the monitor ROM (and perhaps CP/M jump table) was
> located at the top of memory, with sometimes as little as 24K of RAM,
> thinking of the monitor addresses as negative provided a semblance of
> contiguity. With that convention, the useful addresses ran from -800 to
> +5FFF. If one likes it, why not?

For some reason, some part of my brain still remembers that to get into
the Apple ]['s monitor, you would type "call -151". I haven't touched
an Apple ][ (or a simulation) in years.

Some people get bent out of shape over the interpretation of numbers. I
remember that I wasn't alone in calling character codes above 127 as
"negative ASCII." Well, it's only negative if you choose to interpret
the bit pattern as negative-- same as addresses.
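The "negative ASCII" reading is the same trick at byte width -- a sketch (the example byte value is arbitrary):

```python
def to_signed8(byte):
    """View an 8-bit pattern the way a signed char would."""
    return byte - 256 if byte & 0x80 else byte

assert to_signed8(0xE9) == -23    # 233 unsigned; "negative ASCII" -23
assert to_signed8(0x41) == 0x41   # plain ASCII 'A' is unaffected
```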

I'm not aware of any computer architecture where "negative addresses"
actually mean anything more than the highest bit is set. But there are
plenty of marginal architectures I'm constantly surprised by.

Jerry Avins

unread,
May 16, 2007, 6:31:01 PM5/16/07
to
John Passaniti wrote:
> Jerry Avins wrote:
>> I'm not suggesting that you twist your mind, rather that negative
>> addresses can have a simple interpretation if one wants to use it.
>> Back in the dark ages when the monitor ROM (and perhaps CP/M jump
>> table) was located at the top of memory, with sometimes as little as
>> 24K of RAM, thinking of the monitor addresses as negative provided a
>> semblance of contiguity. With that convention, the useful addresses
>> ran from -800 to +5FFF. If one likes it, why not?
>
> For some reason, some part of my brain still remembers that to get into
> the Apple ]['s monitor, you would type "call -151". I haven't touched
> an Apple ][ (or a simulation) in years.

Sure. That's as good as FEAF (or FF69 if the 151 is decimal).
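Both readings drop straight out of 16-bit two's complement:

```python
# The Apple ][ monitor entry: BASIC's CALL takes a signed value, but the
# hardware only ever sees the resulting 16-bit pattern.
assert (-151) & 0xFFFF == 0xFF69    # "CALL -151" with 151 read as decimal
assert (-0x151) & 0xFFFF == 0xFEAF  # the same digits read as hex
```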

> Some people get bent out of shape over the interpretation of numbers. I
> remember that I wasn't alone in calling character codes above 127 as
> "negative ASCII." Well, it's only negative if you choose to interpret
> the bit pattern as negative-- same as addresses.

Many of the analog<-->digital converters I used to use were offset
binary. That made reading the numbers interesting!

> I'm not aware of any computer architecture where "negative addresses"
> actually mean anything more than the highest bit is set. But there are
> plenty of marginal architectures I'm constantly surprised by.

In what I wrote, I assumed that. There could be exceptions. At times I
obfuscated code by scrambling the address and data lines to my
ROMs. It made casual snooping hard. The address lines were fairly easily
scoped out by a determined snooper, but the data lines were hard unless
you thought of writing and capturing the data with a signal analyzer.

Jean-François Michaud

unread,
May 16, 2007, 9:39:51 PM5/16/07
to

A nice mind twist to circumvent a badly planned-out design. I still
don't see any use for actual negative absolute addresses other than
making life more complicated than it needs to be (but maybe I'm not
seeing a clear advantage to this method). I have a tendency to dislike
creating semantic noise for myself.

> If one likes it, why not?

And a very bad rule in general. Because a more correct path isn't
easily seen doesn't necessarily mean that whatever path is found
should be walked.

Regards
Jean-Francois Michaud

John Passaniti

unread,
May 17, 2007, 1:32:06 AM5/17/07
to
Jean-François Michaud wrote:
> A nice mind twist to circumvent a badly planned-out design. I still
> don't see any use for actual negative absolute addresses other than
> making life more complicated than it needs to be (but maybe I'm not
> seeing a clear advantage to this method). I have a tendency to dislike
> creating semantic noise for myself.

Where is the complication? I can certainly see some people not liking
the use of negative numbers for addresses for reasons ranging from
aesthetic choice or love of convention. But why does a negative number
introduce any *complication* as you have claimed?

Let's try a real world example. On HC08 platforms, the reset vector is
located two bytes prior to the end of memory. How would you reference
that address in code? I wouldn't have any problem calling that address
as either 0xfffe or -2. Both are fully equivalent, and -2 has the
additional property of better documenting the meaning of the address.
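That equivalence is easy to check: in a 16-bit address space the two spellings denote the same bit pattern:

```python
assert (-2) & 0xFFFF == 0xFFFE  # -2 and $fffe: identical 16-bit pattern
assert 0x10000 - 2 == 0xFFFE    # i.e. "two bytes prior to the end of memory"
```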

>> If one likes it, why not?
>
> And a very bad rule in general. Because a more correct path isn't
> easily seen doesn't necessarily mean that whatever path is found
> should be walked.

You're claiming that one "path" is "more correct" but not providing any
supporting argument for that claim. But that's par for the course here
in comp.lang.forth, where personal preference and arbitrary choices are
often elevated as being The Right Way we should all follow.

Anton Ertl

unread,
May 17, 2007, 2:32:06 AM5/17/07
to
John Passaniti <nn...@JapanIsShinto.com> writes:
>I'm not aware of any computer architecture where "negative addresses"
>actually mean anything more than the highest bit is set.

Where does it play a role? AFAICS only in comparing addresses:

- Having a value that's guaranteed not to be a valid address (the NULL
address). This has been discussed before, even in this thread.

- When comparing which of two addresses is smaller, do we use U< or <?
ANS Forth says that addresses are unsigned, so we should use U<.

In ANS Forth, it only makes sense to compare addresses coming from the
same contiguous region, so the effect of this unsignedness on systems
is that they can create contiguous regions that contain the address
$80..00 boundary, but must not create contiguous regions that contain
the address 0 (this, plus the role of 0 in words like IF make 0 an
ideal candidate for the NULL address); or more precisely, the
contiguous region must not start at an address that has the highest
bit set and end at an address that has the highest bit clear.
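The U< / < distinction bites exactly at the $80..00 boundary. A sketch of the two orderings (32-bit cells assumed):

```python
MASK = 0xFFFFFFFF

def u_less(a, b):
    """Forth U< : compare the raw unsigned 32-bit patterns."""
    return (a & MASK) < (b & MASK)

def s_less(a, b):
    """Forth < : reinterpret the 32-bit patterns as signed first."""
    sign = lambda x: ((x & MASK) ^ 0x80000000) - 0x80000000
    return sign(a) < sign(b)

lo, hi = 0x7FFFFFFF, 0x80000000  # two addresses straddling the boundary
assert u_less(lo, hi)            # U< keeps the contiguous-region order
assert not s_less(lo, hi)        # < sees hi as "negative" and inverts the order
```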

If somebody wants to implement ANS Forth on a transputer, he has to
find a way to ensure this. And that's not particularly hard; one
approach is to add a bias value when translating between Forth
addresses and native addresses (similar to what the 68000
implementation mentioned elsewhere in this thread did in order to turn
the addresses into signed addresses), but there are certainly cheaper
ways to achieve this.

Ed

unread,
May 17, 2007, 3:20:02 AM5/17/07
to

"Bruce McFarling" <agi...@netscape.net> wrote in message news:1179008113.2...@e65g2000hsc.googlegroups.com...
> On May 12, 2:28 pm, Andrew Haley <andre...@littlepinkcloud.invalid>

> wrote:
> > Is there any point? Even the most obviously sensible suggestions seem
> > to get shouted down.
>
> I wasn't shouting against it, I was pointing out why ( address ) was
> left as an arbitrary unsigned.
>
> I would particularly like ' never returning 0 ... it would simplify
> putting a branch rather than recursive MYSELF in an xt-based decision
> table to be able to have a value known not to be an xt.

Is the issue about "address 0" or about xt = 0 ?
If it's the latter then one only has to define that.

John Doty

unread,
May 17, 2007, 8:25:48 AM5/17/07
to

An advantage of having a flag register separate from the stack is that
every word gets a "side channel" to communicate success or failure
through. This eliminates most of the need for special magic return values.

--
John Doty, Noqsi Aerospace, Ltd.
http://www.noqsi.com/
--
Specialization is for robots.

Jerry Avins

unread,
May 17, 2007, 8:27:12 AM5/17/07
to
John Passaniti wrote:

Just drop it. There was a time when negative numbers were thought to be
illegitimate solutions to any numeric problem. Some people just haven't
caught up. I wonder how Jean-Francois would react if the addresses in
one of the fields in a Harvard-architecture machine were prefixed with
-- Heaven forfend! -- 'i'?

Andreas Kochenburger

unread,
May 17, 2007, 8:36:35 AM5/17/07
to
John Doty wrote:
> An advantage of having a flag register separate from the stack is that
> every word gets a "side channel" to communicate success or failure
> through. This eliminates most of the need for special magic return values.

When you feed an "errno" register with every word, then you can also
throw an exception. Speed and size penalties are of the same magnitude.

Andreas

Jean-François Michaud

unread,
May 17, 2007, 12:55:22 PM5/17/07
to
On May 17, 8:27 am, Jerry Avins <j...@ieee.org> wrote:
> John Passaniti wrote:


Hahah, typical Jerry, still not trying to understand anybody else's
point of view. Sure Jerry, I haven't picked up; I don't understand and
you do because I don't agree with you. Once again, you figured
everything out.

Jean-Francois Michaud

Jean-François Michaud

unread,
May 17, 2007, 1:08:48 PM5/17/07
to
On May 17, 1:32 am, John Passaniti <put-my-first-name-

h...@JapanIsShinto.com> wrote:
> Jean-François Michaud wrote:
> > A nice mind twist to circumvent a badly planned-out design. I still
> > don't see any use for actual negative absolute addresses other than
> > making life more complicated than it needs to be (but maybe I'm not
> > seeing a clear advantage to this method). I have a tendency to dislike
> > creating semantic noise for myself.
>
> Where is the complication? I can certainly see some people not liking
> the use of negative numbers for addresses for reasons ranging from
> aesthetic choice or love of convention. But why does a negative number
> introduce any *complication* as you have claimed?

Simply because an additional concept has to be incorporated into the
loop when the same thing can be described more simply. Call it Occam's
Razor, semantic bloat, whatever you like. The simplest solution takes
all. I can probably find a way to describe the same thing using
imaginary numbers but that wouldn't add anything to the solution other
than complicating it further without actually describing anything
more.

I don't know how adamant you are about allowing bloat in software, I'm
equally harsh on semantic noise when I perceive it.

> Let's try a real world example. On HC08 platforms, the reset vector is
> located two bytes prior to the end of memory. How would you reference
> that address in code? I wouldn't have any problem calling that address
> as either 0xfffe or -2. Both are fully equivalent, and -2 has the
> additional property of better documenting the meaning of the address.

FFFE for me. So how does -2 document the meaning of the address more
than FFFE does?

> >> If one likes it, why not?
>
> > And a very bad rule in general. Because a more correct path isn't
> > easily seen doesn't necessarily mean that whatever path is found
> > should be walked.
>
> You're claiming that one "path" is "more correct" but not providing any
> supporting argument for that claim. But that's par for the course here
> in comp.lang.forth, where personal preference and arbitrary choices are
> often elevated as being The Right Way we should all follow.

The argument is Occam's Razor. Whatever is the simplest takes it away.
If you can show me that using negative absolute addresses has an
advantage over the view of only having positive addresses, then I'll
switch if the advantages are significant and relevant without adding
potential caveats or problems. One can certainly do plenty of tricks
using this concept, no doubt, but I'm pretty sure, from what I can
see, that I can describe anything I care to describe using only
positive addresses. This is conceptually simpler.

Bloat isn't only in software. There is such a thing as semantic bloat
also.

Regards
Jean-Francois Michaud

John Passaniti

unread,
May 17, 2007, 2:50:38 PM5/17/07
to
Jean-François Michaud wrote:
>> Where is the complication? I can certainly see some people not liking
>> the use of negative numbers for addresses for reasons ranging from
>> aesthetic choice or love of convention. But why does a negative number
>> introduce any *complication* as you have claimed?
>
> Simply because an additional concept has to be incorporated into the
> loop when the same thing can be described more simply. Call it Occam's
> Razor, semantic bloat, whatever you like. The simplest solution takes
> all. I can probably find a way to describe the same thing using
> imaginary numbers but that wouldn't add anything to the solution other
> than complicating it further without actually describing anything
> more.

This makes no sense to me. There is no "additional concept" here. A
number is a number-- in the example I provided the bit pattern for -2
and $fffe is exactly the same. How is something more complex if it's
exactly the same?

> I don't know how adamant you are about allowing bloat in software, I'm
> equally harsh on semantic noise when I perceive it.

Are we even talking about the same thing? There is no bloat in choosing
to use a different representation. Not one byte of code changes in the
following:

-2 @ .
versus
$fffe @ .

The difference is purely in the *source* code, and purely up to the
experience level, aesthetic choices, and meaning the programmer wants to
apply to the numbers.

>> Let's try a real world example. On HC08 platforms, the reset vector is
>> located two bytes prior to the end of memory. How would you reference
>> that address in code? I wouldn't have any problem calling that address
>> as either 0xfffe or -2. Both are fully equivalent, and -2 has the
>> additional property of better documenting the meaning of the address.
>
> FFFE for me. So how does -2 document the meaning of the address more
> than FFFE does?

Go back to what I wrote. I said that the address was two bytes prior to
the end of memory. The number -2 maps directly to the phrase "two bytes
prior". The number $fffe does not.

The issue is choosing a numeric representation that matches the problem
you're trying to solve. We're talking about negative numbers, but it
also applies to other number bases. I recently wrote about a case where
I found operating in base 36 was the most natural for the problem I was
trying to solve. And there are some interesting properties of other
number bases for other classes of problems. The underlying numbers that
come out of those other bases are still just numbers. It's all just a
matter of how you choose to interpret the bits.

>> You're claiming that one "path" is "more correct" but not providing any
>> supporting argument for that claim. But that's par for the course here
>> in comp.lang.forth, where personal preference and arbitrary choices are
>> often elevated as being The Right Way we should all follow.
>
> The argument is Occam's Razor. Whatever is the simplest takes it away.
> If you can show me that using negative absolute addresses has an
> advantage over the view of only having positive addresses, then I'll
> switch if the advantages are significant and relevant without adding
> potential caveats or problems.

I just did. An interrupt vector located two bytes prior to the end of
memory is more naturally represented (in my mind) as -2, not as $fffe.

> One can certainly do plenty of tricks
> using this concept, no doubt, but I'm pretty sure, from what I can
> see, that I can describe anything I care to describe using only
> positive addresses. This is conceptually simpler.

This is your core problem. You are seeing addresses as positive or
negative. This is false. Addresses are best thought of not as positive
or negative, but as a pattern of bits that drives hardware. Your mind
has created two different classes of numbers for addresses-- it's you
who has created complexity where there is none.

> Bloat isn't only in software. There is such a thing as semantic bloat
> also.

Yes, and you're guilty of it. You're the one envisioning that there are
positive and negative addresses, not me. I see these all as bit
patterns that drive hardware, and how I choose to represent those bit
patterns (positive numbers, negative numbers, numbers in other bases,
various bit-swizzling, etc.) doesn't matter.

Jerry Avins

unread,
May 17, 2007, 5:18:51 PM5/17/07
to
Jean-François Michaud wrote:

...

> Hahah, typical Jerry, still not trying to understand anybody else's
> point of view. Sure Jerry, I haven't picked up; I don't understand and
> you do because I don't agree with you. Once again, you figured
> everything out.

I understand that you assume that whatever you see no use for is
useless. I don't advocate negative addresses, but unlike you, I
understand how someone could choose to use them. You wrote, "I still
don't see any use for actual negative absolute addresses other than
making life more complicated than it needs to be (but maybe I'm not
seeing a clear advantage to this method). I have a tendency to dislike
creating semantic noise for myself." You don't see a use, so you want to
discourage others from finding one. How nice!

Jean-François Michaud

unread,
May 17, 2007, 8:17:45 PM5/17/07
to
On May 17, 2:50 pm, John Passaniti <n...@JapanIsShinto.com> wrote:
> Jean-François Michaud wrote:
> >> Where is the complication? I can certainly see some people not liking
> >> the use of negative numbers for addresses for reasons ranging from
> >> aesthetic choice or love of convention. But why does a negative number
> >> introduce any *complication* as you have claimed?
>
> > Simply because an additional concept has to be incorporated into the
> > loop when the same thing can be described more simply. Call it Occam's
> > Razor, semantic bloat, whatever you like. The simplest solution takes
> > all. I can probably find a way to describe the same thing using
> > imaginary numbers but that wouldn't add anything to the solution other
> > than complicating it further without actually describing anything
> > more.
>
> This makes no sense to me. There is no "additional concept" here. A
> number is a number-- in the example I provided the bit pattern for -2
> and $fffe is exactly the same. How is something more complex if it's
> exactly the same?

Negative numbers are an extension of the concept of numbers, and there's
a good reason why they encountered resistance up until the 17th century
in Europe. They were used in isolated cases here and there,
but not as a widespread idea. The reason is that they require just that: an
extension to the concept of numbers. I don't have a problem with
negative numbers; it's a useful concept, but only where adequate.

All I'm saying is that it is not necessary HERE. Because you can use
it, and because it so happens to map to the way you think about it in
English, doesn't mean you should use it. The question is: does it make
sense to use it? I come to the conclusion that it isn't necessary,
which makes the whole notion of negative number representation drop to
the ground. Simpler.

I'm very much aware that choosing a correct base to work with can be
more adequate, but this is an entirely different issue which is not
directly relevant to this matter.

> >> You're claiming that one "path" is "more correct" but not providing any
> >> supporting argument for that claim. But that's par for the course here
> >> in comp.lang.forth, where personal preference and arbitrary choices are
> >> often elevated as being The Right Way we should all follow.
>
> > The argument is Occam's Razor. Whatever is the simplest takes it away.
> > If you can show me that using negative absolute addresses has an
> > advantage over the view of only having positive addresses, then I'll
> > switch if the advantages are significant and relevant without adding
> > potential caveats or problems.
>
> I just did. An interrupt vector located two bytes prior to the end of
> memory is more naturally represented (in my mind) as -2, not as $fffe.

Use -2 then, what can I say. You have shown me that the "advantage" of
using negative number representation for addresses in programming
accommodates a verbal twist. Hardly advantageous at all. It's a glass
half full vs glass half empty argument.

> > One can certainly do plenty of tricks
>
> > using this concept, no doubt, but I'm pretty sure, from what I can
> > see, that I can describe anything I care to describe using only
> > positive addresses. This is conceptually simpler.
>
> This is your core problem. You are seeing addresses as positive or
> negative. This is false.

I was referring to positive addresses because I was stressing the
opposite of negative number representation. I think of addresses as a
unique sequence of bits which I would NOT represent using negative
numbers.

> Addresses are best thought of not as positive
> or negative, but as a pattern of bits that drives hardware. Your mind
> has created two different classes of numbers for addresses-- it's you
> who has created complexity where there is none.

Nope.

> > Bloat isn't only in software. There is such a thing as semantic bloat
> > also.
>
> Yes, and you're guilty of it. You're the one envisioning that there are
> positive and negative addresses, not me. I see these all as bit
> patterns that drive hardware, and how I choose to represent those bit
> patterns (positive numbers, negative numbers, numbers in other bases,
> various bit-swizzling, etc.) doesn't matter.

Sure, tell me what I think instead of asking me; you're obviously in a
better position than me to let me know what I think about the subject.

Regards
Jean-Francois Michaud

Jean-François Michaud

unread,
May 17, 2007, 8:21:25 PM5/17/07
to
On May 17, 5:18 pm, Jerry Avins <j...@ieee.org> wrote:

> Jean-François Michaud wrote:
>
> ...
>
> > Hahah, typical Jerry, still not trying to understand anybody else's
> > point of view. Sure Jerry, I haven't picked up; I don't understand and
> > you do because I don't agree with you. Once again, you figured
> > everything out.
>
> I understand that you assume that whatever you see no use for is
> useless. I don't advocate negative addresses, but unlike you, I
> understand how someone could choose to use them. You wrote, "I still
> don't see any use for actual negative absolute addresses other than
> making life more complicated than it needs to be (but maybe I'm not
> seeing a clear advantage to this method). I have a tendency to dislike
> creating semantic noise for myself." You don't see a use, so you want to
> discourage others from finding one. How nice!

Oh, he's good.

John Passaniti

unread,
May 17, 2007, 8:37:16 PM5/17/07
to
Jean-François Michaud wrote:
>> Yes, and you're guilty of it. You're the one envisioning that there are
>> positive and negative addresses, not me. I see these all as bit
>> patterns that drive hardware, and how I choose to represent those bit
>> patterns (positive numbers, negative numbers, numbers in other bases,
>> various bit-swizzling, etc.) doesn't matter.
>
> Sure, tell me what I think instead of asking me; you're obviously in a
> better position than me to let me know what I think about the subject.

Apparently, I am, since your larger response was entirely consistent with
what I wrote above. You yourself stated that you view negative
addresses as something different than positive addresses, which is the
source of your comments here. So tell me how I got your stunning
argument wrong? I didn't.

In any case, the problem is in your head, not mine. The computer
doesn't care if I represent an address as $fffe, -2, or any other
representation. And as a programmer who understands the underlying
hardware and doesn't pretend that hardware cares about how I choose to
represent numbers at the source level, I'll chalk our difference up to
one of experience.

I do look forward to future messages from you, and seeing what other
arbitrary aesthetic choices you will offer as The Right Way the rest of
us should think.

Jean-François Michaud

unread,
May 17, 2007, 8:50:44 PM5/17/07
to
On May 17, 8:37 pm, John Passaniti <n...@JapanIsShinto.com> wrote:
> Jean-François Michaud wrote:
> >> Yes, and you're guilty of it. You're the one envisioning that there are
> >> positive and negative addresses, not me. I see these all as bit
> >> patterns that drive hardware, and how I choose to represent those bit
> >> patterns (positive numbers, negative numbers, numbers in other bases,
> >> various bit-swizzling, etc.) doesn't matter.
>
> > Sure, tell me what I think instead of asking me; you're obviously in a
> > better position than me to let me know what I think about the subject.
>
> Apparently, I am since your larger response was entirely consistent with
> what I wrote above. Your yourself stated that you view negative
> addresses as something different than positive addresses, which is the
> source of your comments here. So tell me how I got your stunning
> argument wrong? I didn't.
>
> In any case, the problem is in your head, not mine. The computer
> doesn't care if I represent an address as $fffe, -2,

Really?! I thought they cared! This is brand new information!

> or any other
> representation. And as a programmer who understands the underlying
> hardware and doesn't pretend that hardware cares about how I choose to
> represent numbers at the source level, I'll chalk our difference up to
> one of experience.

Yeah, you're much better than me.

> I do look forward to future messages from you, and seeing what other
> arbitrary aesthetic choices you will offer as The Right Way the rest of
> us should think.

And you, of course, conveniently snip off the relevant portions of the
message and choose to comment on the least relevant part while trying
to burn the idea as a strawman. I see that you seem to encounter some
real difficulties when discussing real issues that impact the way you
think about concepts, but that's okay, I'll chalk our difference up to
one of experience.

Regards
Jean-Francois Michaud

Bruce McFarling

unread,
May 17, 2007, 9:03:46 PM5/17/07
to
On May 17, 3:20 am, "Ed" <nos...@invalid.com> wrote:
> Is the issue about "address 0" or about xt = 0 ?
> If it's the latter then one only has to define that.

Anton Ertl's comment about the Variable NIL as the
portable terminator for a linked list suggested
to me that the xt of the Variable NIL would be a
suitable "not an xt, perform special action" flag.

So instead of

@ ?DUP IF ....

its

@ DUP ['] NIL = IF DROP ...

Bruce McFarling

unread,
May 17, 2007, 9:05:16 PM5/17/07
to
On May 17, 9:03 pm, Bruce McFarling <agil...@netscape.net> wrote:
> So instead of
> @ ?DUP IF ....
> its
> @ DUP ['] NIL = IF DROP ...

That is, instead of:

@ ?DUP 0= IF ....

Elizabeth D Rather

unread,
May 17, 2007, 9:31:37 PM5/17/07
to

Why not define NIL as a constant? Compared to defining it as something
with an xt, it's cleaner (both to do and to use). If you really want to
get it as an xt you could say ['] FOO CONSTANT NIL .

@ DUP NIL = IF ...

Or, if you're doing this a lot,

: NIL? ( xt -- 0 | xt xt )
DUP NIL = IF DROP 0 ELSE DUP THEN ;

(may be faster in code).

Cheers,
Elizabeth

--
==================================================
Elizabeth D. Rather (US & Canada) 800-55-FORTH
FORTH Inc. +1 310-491-3356
5155 W. Rosecrans Ave. #1018 Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================

Simon Richard Clarkstone

unread,
May 17, 2007, 9:37:13 PM5/17/07
to
Jean-François Michaud wrote:
> Hold on a second. We're talking about source and destination
> addresses. Addresses can't be negative, by definition. Thinking about
> feeding negative numbers to the function is a non problem I would say.

Actually, MIX processors with an interrupt facility use negative
addresses in their interrupt routines. The MIX uses signed-magnitude
for its addresses, so they are *definitely* negative, and not
large-positive.

Of course, the MIX was never actually built; Knuth invented it for
examples in TAoCP, but I believe there are several emulators available.
Also, I think location 0 is addressable and doesn't do anything
special, though I am not sure and do not have a copy of the book to hand.

--
Simon Richard Clarkstone:
s.r.cl?rkst?n?@durham.ac.uk/s?m?n.cl?rkst?n?@hotmail.com
"August 9 - I just made my signature file. Its only 6 pages long.
I will have to work on it some more." -- _Diary of an AOL User_

Jonah Thomas

unread,
May 17, 2007, 9:53:25 PM5/17/07
to
Jean-François Michaud <com...@comcast.net> wrote:

> Sure, tell me what I think instead of asking me; you're obviously in a
> better position than me to let me know what I think about the subject.

You guys have established that it isn't a matter of how the computer
responds, it's a matter of how the programmer thinks about what he's
doing. You advocate the conceptual simplicity of avoiding thoughts about
negative addresses. They say they'll think about negative addresses if
they want to.

I don't think this is worth arguing about. De gustibus. We might as well
mostly avoid thinking about negative addresses until we have a suspicion
they might be good for something, and at that point try them out. When
they're worth the conceptual complexity, then use them. Otherwise don't.


On the other hand, if somebody wants to think about them when there's no
reason yet to think there's any advantage to it, better to let him than
try to argue him out of it. You know the joke about not thinking about
an elephant? I can go whole days without thinking about an elephant, but
if I try not to.... There's no point arguing with people to make them
stop thinking about something. That trick never works.

Maybe better to ask them not to bother you with negative addresses until
they're talking about some specific topic where they think that negative
addresses are actually so useful they're worth bringing in. Then you
aren't stuck arguing about the possibility that the things might be
useful.

I don't know why, but that reminded me of something from an RA Lafferty
story. An aristocratic woman is offered grapes by some fawning
underlings, and she refuses. "Don't bring me grapes, I only want fruit
that has the possibility of worms." So they rush out and come back with
some wormy apples. "Yuck. Take those away. I wanted fruit that has the
*possibility* of worms. I sure don't want fruit with the *actuality* of
worms."

Jean-François Michaud

unread,
May 17, 2007, 10:15:38 PM5/17/07
to
On May 17, 6:53 pm, Jonah Thomas <j2tho...@cavtel.net> wrote:

Amen ;-).

Regards
Jean-Francois Michaud

Jean-François Michaud

unread,
May 17, 2007, 10:23:56 PM5/17/07
to
On May 17, 6:37 pm, Simon Richard Clarkstone

<s.r.clarkst...@durham.ac.uk> wrote:
> Jean-François Michaud wrote:
> > Hold on a second. We're talking about source and destination
> > addresses. Addresses can't be negative, by definition. Thinking about
> > feeding negative numbers to the function is a non problem I would say.
>
> Actually, MIX processors with an interrupt facility use negative
> addresses in their interrupt routines. The MIX uses signed-magnitude
> for its addresses, so they are *definitely* negative, and not
> large-positive.
>
> Of course, the MIX was never actually built; Knuth invented it for
> examples in TAoCP, but I believe there are several emulators available.
> Also, I think location 0 is addressable and doesn't do anything
> special, though I am not sure and do not have a copy of the book to hand.

Interesting, I'd never heard of the MIX. I'll look it up :).

I should have been clearer. What I meant was that allowing negative
numbers as inputs to the word seems like fertile ground for confusion,
problems, and extra rules to keep track of, whereas the word stays
straightforward and free of surprises if its input boundaries are
restricted to direct, positively represented addresses.

Regards
Jean-Francois Michaud

Bruce McFarling

unread,
May 17, 2007, 10:59:03 PM5/17/07
to
On May 16, 1:56 pm, Jerry Avins <j...@ieee.org> wrote:
> I'm not suggesting that you twist your mind, rather that negative
> addresses can have a simple interpretation if one wants to use it. Back
> in the dark ages when the monitor ROM (and perhaps CP/M jump table) was
> located at the top of memory, with sometimes as little as 24K of RAM,
> thinking of the monitor addresses as negative provided a semblance of
> contiguity. With that convention, the useful addresses ran from -800 to
> +5FFF. If one likes it, why not?

I do not see how a "semblance of contiguity" is better than
using addresses that line up with the memory map addresses
in the documentation.
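The convention Jerry describes amounts to reinterpreting a 16-bit address as a signed two's-complement number: ROM at the top of memory then appears as a small negative range contiguous with RAM, so "useful addresses ran from -800 to +5FFF". A sketch (the 0xF800 monitor base is inferred from the -800 figure, not stated in the post):

```c
#include <assert.h>
#include <stdint.h>

/* Reinterpret a 16-bit address as signed two's complement, written
   portably: subtract 2^16 from the upper half of the address space. */
static int16_t as_signed(uint16_t addr)
{
    return (int16_t)(addr >= 0x8000 ? (int32_t)addr - 0x10000 : addr);
}
```

Under this view the monitor ROM at 0xF800..0xFFFF reads as -0x800..-1, directly adjacent to RAM ending at +0x5FFF.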
