
Single-instruction stack-based computer?


William F. Gilreath

May 11, 2001, 1:47:24 AM5/11/01
to
Hello all,

I'm currently doing research into single-instruction computers. I've run
across articles in SIGForth, and wanted to get input on a question relating
to stack-based computers.

I'm curious to know whether there is, or has been, a stack-based
single-instruction computer, or even whether one is remotely possible. Given
the power and versatility of stack-based computers, I would think it very
possible.

Anyone run across such a computer or processor architecture using a stack?
Thanks in advance for your help!

-- William Gilreath
wi...@williamgilreath.com


cLIeNUX user

May 11, 2001, 4:37:09 AM5/11/01
to
humb...@smart.net

Lisp has 5 primitives, a Turing machine had, what, 4? There are Forth
machines with 8. How do you do one?

Rick Hohensee
www.clienux.com



John Passaniti

May 11, 2001, 11:55:06 AM5/11/01
to

"cLIeNUX user" <r@your_host.com> wrote in message
news:tfn95le...@corp.supernews.com...

> Lisp has 5 primitives, a Turing machine had, what, 4? There
> are Forth machines with 8. How do you do one?

One way is to make the programmer's life hell and give them a single
instruction from which to synthesize everything else.

http://www.cas.mcmaster.ca/~cs1md3/topics/topics-sic.html is typical of
this. Long ago, someone who was probably really bored figured out that a
"subtract and branch if negative" instruction could (along with
memory-mapped I/O) be used to synthesize everything. And ever since,
Computer Science students have had to suffer this puzzle.
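
To make the puzzle concrete, here is a minimal C sketch of such a machine,
assuming the common "subleq" encoding (subtract, then branch if the result
is less than or equal to zero); the memory size and the halt-on-negative-
target convention are illustrative choices, not part of any particular
design:

    #include <stdio.h>

    /* One-instruction machine, subleq variant.  An instruction is three
       cells A, B, C with the semantics: mem[B] -= mem[A]; if the result
       is <= 0, jump to C, else fall through to the next instruction.
       A negative jump target halts (an illustrative convention). */
    int mem[256];

    void run(int pc) {
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
    }

    int main(void) {
        /* Synthesize addition from subtraction alone:
           scratch -= x; y -= scratch  ==>  y += x. */
        mem[20] = 2; mem[21] = 3; mem[22] = 0;  /* x, y, scratch */
        int prog[] = {
            20, 22, 3,   /* scratch -= x; branch target is the next
                            instruction either way */
            22, 21, 6,   /* y -= scratch, i.e. y += x */
            22, 22, -1   /* scratch -= scratch is 0, so branch to -1: halt */
        };
        for (int i = 0; i < 9; i++) mem[i] = prog[i];
        run(0);
        printf("%d\n", mem[21]);  /* prints 5 */
        return 0;
    }

Everything else -- copies, unconditional jumps, comparisons -- has to be
synthesized the same way, which is exactly the suffering mentioned above.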

But when people talk about useful designs for a single instruction set
computer, they usually mean something else. In the papers I have read on
the subject, the idea boils down to tying execution units to addresses in
the computer's address space. Then the only instruction becomes a memory
move instruction. The *address* you are moving from and/or to ends up being
the instruction.

Say I have a working register at address 0 and an adder hooked up that
triggers when there is a write to address 1. If I want to add 2 and 3, I
would do something like this:

move 2 to address 0 ; store 2 in the working register
move 3 to address 1 ; add three

I now have 5 in address 0.

Now extend the idea. Tie subtraction to address 2. Hook up a MAC unit to
address 3 for DSP fun. Need a branch if zero? Write the address you want
to jump to into address 4. If the working register is zero, off you go.
Want subroutine calls? Tie a return stack to location 5. Write the address
you want to go to there. When you want to come back, read back from address
5 and store that in the address holding the program counter. The key is
that whatever you want to do, you tie to addresses that you read or write.
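
Here is a toy C simulation to make the triggering explicit; the address
map (0 = working register, 1 = adder, 2 = subtractor) follows the example
above, and the rest is invented for illustration, not taken from any real
design:

    #include <stdio.h>

    /* Toy move machine: the only operation is "move a value to an
       address", but some addresses trigger execution units when
       written. */
    int work;  /* the working register, living at "address 0" */

    void move_to(int addr, int value) {
        switch (addr) {
            case 0: work = value;  break;  /* plain store */
            case 1: work += value; break;  /* a write to 1 fires the adder */
            case 2: work -= value; break;  /* a write to 2, the subtractor */
        }
    }

    int main(void) {
        move_to(0, 2);         /* store 2 in the working register */
        move_to(1, 3);         /* add 3 */
        printf("%d\n", work);  /* prints 5 */
        return 0;
    }

Branching and subroutine calls would hang more cases off the switch in the
same way, with reads from triggering addresses handled symmetrically.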

I don't know if anyone has turned this idea into a commercial reality. The
advantage is an extremely simple and direct design. The disadvantage is
that you are limited by memory bandwidth, since everything is driven by how
fast you can grab addresses for the move instruction.

cLIeNUX user

May 11, 2001, 12:34:00 PM5/11/01
to
humb...@smart.net

John, are you the same John Passaniti? Your post quality has gone through
the ceiling :o)

The former sounds like one composite instruction; the latter sounds like
memory-mapped instructions. Jan Coombs and I were just discussing something
similar for Forth, a Subroutine Machine, here in c.l.f. It might reduce
the cost of address-threaded code.

Rick Hohensee
www.clienux.com

William Tanksley

May 11, 2001, 6:35:43 PM5/11/01
to
On Fri, 11 May 2001 05:47:24 GMT, William F. Gilreath wrote:
>Hello all,
> I'm currently doing research into single-instruction computers. I've run
>across articles in SIGForth, and wanted to get input on a question relating
>to stack-based computers.

> I'm curious to know whether there is, or has been, a stack-based
> single-instruction computer, or even whether one is remotely possible.
> Given the power and versatility of stack-based computers, I would think
> it very possible.

Most people who design single-instruction computers are cheating. The
smallest number of instructions you could possibly do anything with is
two, not one. Most of their computers use vastly more than two
instructions; they cheat by pretending that their 'registers' are not the
same things as their 'instructions'.

To debunk them, just count the number of bits in their instruction word.
You'll probably find that it's more than the zero bits you'd need to encode
a 1-instruction machine.

Stack machines have typical instruction sizes of about 5 bits; that's
enough to encode 2^5 = 32 instructions. Instruction sizes for the
so-called single-instruction machines vary, but are usually at least 8 bits
(256 instructions), or more commonly 16 bits (64K instructions).

>Anyone run across such a computer or processor architecture using a stack?
>Thanks in advance for your help!

Stack machines don't usually cheat by pretending that register accesses
are not part of the instructions.

However, if you don't mind cheating, I implemented _most_ of my stack
machine as a single-instruction machine; it had very little decode
circuitry, so most of the time the instruction was only a register address
which was written to the top-of-stack. If I had been intending to make a
single-instruction machine, I could have.

>-- William Gilreath

--
-William "Billy" Tanksley

Bernd Paysan

May 12, 2001, 4:49:26 PM5/12/01
to
William Tanksley wrote:
> Most people who design single-instruction computers are cheating. The
> smallest number of instructions you could possibly do anything with is
> two, not one. Most of their computers use vastly more than two
> instructions; they cheat by pretending that their 'registers' are not the
> same things as their 'instructions'.

No, you really can define a single-instruction computer that doesn't
have registers, only memory. You use "move" as the single instruction,
but the trick is that your move doesn't move a complete address, but
only a subword. E.g. you have 12-bit addresses (move is always two
addresses, first from, then to), but you move only 4 bits at a time.

If you want to add two 4-bit numbers stored in memory, you define an
addition table (256 entries), and your code looks like

move a, field1
move b, field2
move { add-table, field1, field2 }, result
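
A quick C sketch shows the table doing the work; indexing by the two
operand nibbles is from the description above, while dropping the carry is
my simplification (a full design would pair this with a second 256-entry
table for the carry nibble):

    #include <stdio.h>

    /* The 256-entry addition table: entry [a][b] holds the low nibble
       of a+b, so "adding" is just one table read. */
    unsigned char add_table[16][16];

    int main(void) {
        for (int a = 0; a < 16; a++)
            for (int b = 0; b < 16; b++)
                add_table[a][b] = (a + b) & 0xF;

        /* "move a, field1" and "move b, field2" select the table entry;
           the third move reads the precomputed sum out. */
        int field1 = 2, field2 = 3;
        printf("%d\n", add_table[field1][field2]);  /* prints 5 */
        return 0;
    }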

The only "cheat" here is that you certainly still need a memory-mapped
PC to do branching. The main difficulty with this is that you can't do
an atomic branch. The trick is to have a "branch table", i.e. you start
with the most signifcant nibble, which takes you to the branch table.
The instructions there fill the lower parts of the PC, and jump out by
moving a new value to the top of the PC. Your jump target still is
somewhat limited, but this doesn't really matter that much.

Why don't single-instruction computers take off? Well, you can use
Occam's razor to shave off all the unnecessary fluffy stuff, but if you
cut too deep, you do severe damage.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

William Tanksley

May 12, 2001, 7:41:09 PM5/12/01
to
On Sat, 12 May 2001 22:49:26 +0200, Bernd Paysan wrote:
>William Tanksley wrote:
>> Most people who design single-instruction computers are cheating. The
>> smallest number of instructions you could possibly do anything with is
>> two, not one. Most of their computers use vastly more than two
>> instructions; they cheat by pretending that their 'registers' are not the
>> same things as their 'instructions'.

>No, you really can define a single-instruction computer that doesn't
>have registers, only memory. You use "move" as the single instruction,
>but the trick is that your move doesn't move a complete address, but
>only a subword. E.g. you have 12-bit addresses (move is always two
>addresses, first from, then to), but you move only 4 bits at a time.

Look at that -- your instruction takes up 24 bits, with all bits being
used. Looks like a 2^24-instruction computer to me!

>If you want to add two 4-bit numbers stored in memory, you define an
>addition table (256 entries), and your code looks like
> move a, field1
> move b, field2
> move { add-table, field1, field2 }, result

Ah, I see that you're also allowing some kind of indirect addressing (I
can't guess what kind; there are three addresses mentioned in there, and I
don't know why). That's another bit added to the instruction size, and
possibly 2 more 12-bit address fields.

This single-instruction computer now has more instructions than a 32-bit
RISC machine (2^32 instructions with only a few levels of decoding).

>The only "cheat" here is that you certainly still need a memory-mapped
>PC to do branching.

You're also cheating by not counting all of the bits in your instruction
format.

>Why don't single-instruction computers take off? Well, you can use
>Occam's razor to shave off all the unnecessary fluffy stuff, but if you
>cut too deep, you do severe damage.

Not really -- after a while you just realise, at least subconsciously, that
you're not accomplishing anything at all; you're only pretending to have a
single instruction, and the pretence is costing you severely.

>Bernd Paysan

--
-William "Billy" Tanksley

Jerry Avins

May 12, 2001, 9:02:20 PM5/12/01
to

That's OK. We're only pretending to have a serious discussion. ... JA




--
Engineering is the art of making what you want from things you can get.
-----------------------------------------------------------------------

Bernd Paysan

May 13, 2001, 5:26:47 PM5/13/01
to
William Tanksley wrote:
> >If you want to add two 4-bit numbers stored in memory, you define an
> >addition table (256 entries), and your code looks like
> > move a, field1
> > move b, field2
> > move { add-table, field1, field2 }, result
>
> Ah, I see that you're also allowing some kind of indirect addressing (I
> can't guess what kind; there are three addresses mentioned in there, and I
> don't know why). That's another bit added to the instruction size, and
> possibly 2 more 12-bit address fields.

No, the concatenation is just that a 12-bit address is made up of three
four-bit fields, and since you can write into the instruction space, you
can modify those addresses. If you want it in numbers, take the following
dump (cells 0-5 are the first move, 6-B the second, C-11 the third; the
first two moves patch cells D and E, the middle and low nibbles of the
third move's source address):

0: a_high
1: a_mid
2: a_low
3: 0
4: 0
5: D
6: b_high
7: b_mid
8: b_low
9: 0
A: 0
B: E
C: add-table
D: { whatever was moved from 5 here }
E: { whatever was moved from B here }
F: result_high
10: result_mid
11: result_low

You get indirect addressing the "von Neumann" way (by patching your
opcodes in flight).
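
Here is a C sketch of the whole machine running exactly this program; the
concrete placements (a at 0x100, b at 0x101, the table based at nibble C,
the result at 0x200) are illustrative picks, and the PC is simplified to
an ordinary loop rather than the memory-mapped PC described above:

    #include <stdio.h>

    /* Nibble-move machine: memory is 4096 four-bit cells, and the one
       instruction is six consecutive nibbles:
       from_hi, from_mid, from_lo, to_hi, to_mid, to_lo. */
    unsigned char mem[4096];

    int addr(int p) {  /* assemble a 12-bit address from three nibbles */
        return (mem[p] << 8) | (mem[p + 1] << 4) | mem[p + 2];
    }

    int main(void) {
        mem[0x100] = 2; mem[0x101] = 3;       /* a = 2, b = 3 */
        for (int i = 0; i < 16; i++)          /* add table at 0xC00 */
            for (int j = 0; j < 16; j++)
                mem[0xC00 | (i << 4) | j] = (i + j) & 0xF;

        unsigned char prog[] = {
            0x1, 0x0, 0x0,  0x0, 0x0, 0xD,  /* move a into cell D */
            0x1, 0x0, 0x1,  0x0, 0x0, 0xE,  /* move b into cell E */
            0xC, 0x0, 0x0,  0x2, 0x0, 0x0   /* move table entry to result */
        };
        for (int i = 0; i < 18; i++) mem[i] = prog[i];

        /* Cells D and E are the mid and low nibbles of the third move's
           source address, so the first two moves patch the third one. */
        for (int pc = 0; pc < 18; pc += 6)
            mem[addr(pc + 3)] = mem[addr(pc)];

        printf("%d\n", mem[0x200]);  /* prints 5 */
        return 0;
    }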

> You're also cheating by not counting all of the bits in your instruction
> format.

Only decoded parts of an instruction count as different instructions.
OK, the bits in the instruction are decoded by an address decoder, but
that's not done in the CPU. You can implement this machine using 16
latches (12 for the addresses, 4 for the data), plus some flip-flops and
a few gates for the latch and PC-increment logic (the PC is stored
back in SRAM; it shares the latches with the move address logic), i.e. a
few 74xx chips or a GAL, a clock, and an SRAM are sufficient. The GAL does
not decode anything (it only steps through the internal states), and that's
why the machine is a "single-instruction machine".

I'm not sure how many instructions you need as a minimum if you count
every bit in the instruction word. Perhaps the following set is
sufficient:

@
!
NAND
2*
1+ (or 1OR)
DUP (or 0)
GOTO (also called PC!)

That needs 3 bits for encoding, no cheating necessary. You use DUP or 0
followed by enough 2* and 1+ or 1OR to generate constants - therefore
this instruction is necessary. The next part of the puzzle is how deep
your stack has to be. I think 2 entries are sufficient (you can't have
less, anyway), because whenever you have something computed, you can
generate an address and store it there.
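
As a sanity check, here is a C sketch of that instruction set over a
two-cell stack (t on top, n below); the opcode numbering, the "push a
zero" reading of DUP/0, and the constant-building example are my guesses
at details left open above:

    #include <stdio.h>

    /* The seven-instruction machine: @ ! NAND 2* 1+ 0 GOTO, encodable
       in 3 bits.  Stack housekeeping after the binary ops is glossed
       over, since with only two cells there is nothing left to drop. */
    enum { FETCH, STORE, NAND, TWOSTAR, ONEPLUS, ZERO, GOTO };

    int mem[64], t, n, pc;

    void step(int op) {
        switch (op) {
            case FETCH:   t = mem[t];   break;  /* @ : replace top */
            case STORE:   mem[t] = n;   break;  /* ! : store n at t */
            case NAND:    t = ~(t & n); break;  /* the one ALU op */
            case TWOSTAR: t <<= 1;      break;  /* 2* */
            case ONEPLUS: t += 1;       break;  /* 1+ */
            case ZERO:    n = t; t = 0; break;  /* push a zero */
            case GOTO:    pc = t;       break;  /* PC! */
        }
    }

    int main(void) {
        /* Build the constant 5 from nothing: 0 1+ 2* 2* 1+ */
        int prog[] = { ZERO, ONEPLUS, TWOSTAR, TWOSTAR, ONEPLUS };
        for (int i = 0; i < 5; i++) step(prog[i]);
        printf("%d\n", t);  /* prints 5 */
        return 0;
    }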
