
old microcode listings


Peter "Firefly" Lund

Dec 22, 2006, 3:49:15 AM
Does anybody know where I can find microcode listings for old Data
General, IBM S/360, or Norsk Data machines?

Microcode for the T-11 ("Tiny") PDP-11 microcontroller would also be
greatly appreciated.

-Peter

j...@beniston.com

Dec 22, 2006, 11:42:13 AM

Peter "Firefly" Lund wrote:
> Does anybody know where I can find microcode listings for old Data
> General, IBM S/360, or Norsk Data machines?

Not much on this Christmas?

Jon

Jeffrey Dutky

Dec 27, 2006, 3:28:51 AM

I've got some training materials for microprogramming of the Varian
70-series; they include a few example microprograms and a description of
the microprogramming environment. If any of this would be of use I
could scan and upload it somewhere or email it to you in chunks.

- Jeff Dutky

Peter "Firefly" Lund

Dec 27, 2006, 6:45:25 PM
On Wed, 27 Dec 2006, Jeffrey Dutky wrote:

> I've got some training materials for microprogramming of Varian
> 70-series, it's got a few example microprograms and a description of
> the microprogramming environment. If any of this would be of use I
> could scan and upload it somewhere or email it to you in chunks.

Yes, please! :)

(I'm more interested in Norsk Data and T-11 but this looks like a good
find, too)

-Peter

ChrisQuayle

Dec 29, 2006, 10:42:45 AM
Peter "Firefly" Lund wrote:

>
> (I'm more interested in Norsk Data and T-11 but this looks like a good
> find, too)
>
> -Peter

If you could make do with more general pdp11 stuff, I think the early
pdp11/05 field maintenance print set had listings of the microcode prom
contents.

The T11 was the micro used on the Falcon embedded qbus board, right? I
spent a couple of years writing macro 11 for that board and may still
have the user guide somewhere, though I'm not sure if it includes
microcode listings. bitsavers.org may be a good place to start for any
of the older mini info.

So what's so interesting about the T11? A more modern embedded
equivalent would be the Texas msp430. The msp is a risc design under the
skin, but the registers, instruction set and addressing modes look like
they've been cribbed almost directly from the pdp architectural model.
Deja vu indeed...

Chris

Peter "Firefly" Lund

Dec 30, 2006, 8:27:25 PM
On Fri, 29 Dec 2006, ChrisQuayle wrote:

> If you could make do with more general pdp11 stuff, I think the early
> pdp11/05 field maintenance print set had listings of the microcode prom
> contents.

Hmmm, might be worth a shot.

> The T11 was the micro used on the Falcon embedded qbus board right ? - spent

Yes, and in the Paperboy video game.

> a couple of years writing macro 11 for that board and may still have the ug
> somewhere, though not sure if it includes microcode listings. bitsavers.org
> may be a good place to start for any of the older mini info.

Nope, it doesn't (I got my copy from bitsavers).

> So what's so interesting about the T11 ?. A more modern embedded equivalent

That it's a very tight PDP-11 implementation, the only single-chip PDP-11
Digital ever made.

According to:

http://simh.trailing-edge.com/semi/t11.html

it has 17,000 transistor "sites", which according to another source I
found translated to about 10,000 actual transistors. This includes
registers and microcode.

> would be the Texas msp430. The msp is a risc design under the skin, but the
> registers, instruction set and addressing modes look like it's been cribbed
> almost direct from the pdp architectural model. Dejavu indeed...

Noted, thanks.

-Peter

ChrisQuayle

Dec 31, 2006, 7:45:31 AM
Peter "Firefly" Lund wrote:
>
>> If you could make do with more general pdp11 stuff, I think the early
>> pdp11/05 field maintenance print set had listings of the microcode
>> prom contents.
>
>
> Hmmm, might be worth a shot.
>

> -Peter

On a more general note, so much material from science is becoming lost,
difficult to find and/or expensive to access. The T11 microcode is just
one example. Were all the docs just binned when dec or later owners
closed the labs, or does some individual or organisation somewhere still
have them? Having a good historical perspective on a subject can save a
lot of time in terms of wheel reinvention. The problem, I guess, is that
companies like dec were so prolific in terms of innovation that it may
be impossible to store, let alone catalog and digitise, all of it.

As someone who finds the history of science more than just interesting,
I think we should be taking far better care of our scientific heritage
and making it more readily available at low cost or free for
researchers. I know there are many organisations that digitise material
and make it available, but it can get quite expensive if you need
several papers, and often you need to read a paper to find out if it's
relevant. One excellent free source is the NASA Tech Reports server -
it has material dating back to the 60's, a period when there was a real
push in terms of research and innovation...

Chris

Peter "Firefly" Lund

Dec 31, 2006, 2:27:50 PM
On Sun, 31 Dec 2006, Peter "Firefly" Lund wrote:

>> would be the Texas msp430. The msp is a risc design under the skin, but
>> the registers, instruction set and addressing modes look like it's been
>> cribbed almost direct from the pdp architectural model. Dejavu indeed...
>
> Noted, thanks.

It does look a bit like a PDP-11 with more registers and fewer addressing
modes and with both more regularity in the registers (even the flags are
in a general-purpose register vs. just the PC and SP on the PDP-11) and
less regularity (R3 is not there -- if used as source it will generate one
of a few constants based on the addressing mode! Likewise, R2, the flags
register, will generate constants in some addressing modes).
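In Python pseudocode (a sketch based on the MSP430 constant generator; the constants follow the architecture's As-bit encodings, and the helper names are made up):

```python
# (register, As addressing-mode bits) -> generated constant.
# R3 (CG2) never acts as a real source; R2 (the status register, CG1)
# yields constants only in its indirect modes.
CONSTANT_GENERATOR = {
    (2, 0b10): 4,      # R2, indirect          -> +4
    (2, 0b11): 8,      # R2, indirect autoinc  -> +8
    (3, 0b00): 0,      # R3, register mode     ->  0
    (3, 0b01): 1,      # R3, indexed mode      -> +1
    (3, 0b10): 2,      # R3, indirect          -> +2
    (3, 0b11): -1,     # R3, indirect autoinc  -> -1 (all ones)
}

def source_operand(reg, as_bits, regfile):
    """Return the generated constant if (reg, As) hits the table,
    otherwise fall through to the ordinary register file."""
    return CONSTANT_GENERATOR.get((reg, as_bits), regfile.get(reg))
```

So `source_operand(3, 0b11, regs)` yields -1 regardless of the register file, while any other register reads normally.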

-Peter

ken...@cix.compulink.co.uk

Jan 1, 2007, 5:59:44 AM
In article <LhOlh.7019$Wy6...@newsfe1-win.ntli.net>,
nos...@devnul.co.uk (ChrisQuayle) wrote:

> On a more general note, so much stuff from science is
> becoming lost, difficult to find and/or expensive to access.

It is not just to science that this applies. About the only
organisations that attempt to keep all documents as a matter of
record are governments. In industrial history (a hobby) you are
largely dependent on secondary sources written before
production and technical records were lost or thrown away.

Ken Young

ChrisQuayle

Jan 3, 2007, 11:11:03 AM
Peter "Firefly" Lund wrote:

>
> It does look a bit like a PDP-11 with more registers and fewer
> addressing modes and with both more regularity in the registers (even
> the flags are in a general-purpose register vs. just the PC and SP on
> the PDP-11) and less regularity (R3 is not there -- if used as source it
> will generate one of a few constants based on the addressing mode!
> Likewise, R2, the flags register, will generate constants in some
> addressing modes).
>
> -Peter

Agreed, there is quite a bit of variance, but it's easy to see which
architecture they were inspired by. Not bad for a system that was
designed in the late 60's. I think the very first pdp11 used hard-wired
logic instead of microcode, but by the 11/05, the boards were full of
bipolar proms and (iirc) the Texas 74181 bit slice.

The saying used to be about the pdp that any addressing mode works with
any instruction and / or register, so long as it sounds sensible and it
is more or less true as well. After programming various intel and other
micros in assembler, the pdp seemed like pure luxury :-)...

Chris

Del Cecchi

Jan 3, 2007, 12:12:42 PM
74181 was a long long way from a bit slice. It was a 4 bit alu.
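For illustration, the way four such 4-bit ALU devices are cascaded through the carry chain to make the 16-bit datapath of a machine like the 11/05 can be sketched in Python (add mode only; the function names are made up):

```python
def alu_slice_add(a4, b4, carry_in):
    """One 4-bit ALU slice (74181-style add mode): returns (sum nibble, carry out)."""
    total = (a4 & 0xF) + (b4 & 0xF) + carry_in
    return total & 0xF, total >> 4

def add16(a, b):
    """Cascade four 4-bit slices via the carry chain to form a 16-bit adder."""
    result, carry = 0, 0
    for i in range(4):  # slice i handles bits 4*i .. 4*i+3
        nibble, carry = alu_slice_add((a >> 4*i) & 0xF, (b >> 4*i) & 0xF, carry)
        result |= nibble << (4*i)
    return result & 0xFFFF
```

Whether one calls the device itself a "bit slice" or merely a 4-bit ALU, the cascading idiom is the same.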

--
Del Cecchi
"This post is my own and doesn't necessarily represent IBM's positions,
strategies or opinions."

Peter "Firefly" Lund

Jan 3, 2007, 12:35:54 PM
On Wed, 3 Jan 2007, ChrisQuayle wrote:

> Agreed, there is quite a bit of variance, but it's easy to see which
> architecture they were inspired by. Not bad for a system that was designed in
> the late 60's.

No, not bad at all.

It's just that thinking in those addressing modes doesn't seem at all
natural to me.

Hopefully, I'll learn. I intend to build a small (10cm x 10cm) PDP-11
clone in the beginning of February as part of my preparations for the
VAX... :)

> I think the very first pdp used hard wired logic, instead of

Yep.

> microcode, but by the 11/05, the boards were full of bipolar proms and (iirc)
> Texas 74181 bit slice.

Didn't they use AMD29xx?

> The saying used to be about the pdp that any addressing mode works with any
> instruction and / or register, so long as it sounds sensible and it is more
> or less true as well. After programming various intel and other micros in
> assembler, the pdp seemed like pure luxury :-)...

From the 386 and on, the instruction set had become quite regular.

-Peter

ChrisQuayle

Jan 3, 2007, 5:18:01 PM

Yes it was, but probably the best that could be done at the time in
commodity devices - ok, bit slice in intent? I think the more complex
AMD 29xx series came quite a bit later than 1972.

Chris

ChrisQuayle

Jan 3, 2007, 5:45:51 PM
Peter "Firefly" Lund wrote:
> It's just that thinking in those addressing modes doesn't seem at all
> natural to me.
>

Ah, the power of indirection. It would probably take me time to get back
into the flow as well, but I haven't worked on any cpu since that is as
clean from an assembler programming point of view. Of course, modern
compilers hide all that, but at the time, much more systems code was
written in assembler.

>
> Hopefully, I'll learn. I intend to build a small (10cm x 10cm) PDP-11
> clone in the beginning of February as part of my preparations for the
> VAX... :)
>

If it's a good enough clone, you could even run (heaven forbid?) all
the old dec os's - rt11, rsx, as well as several early unices. Will you
open source it to opencores.org, or what? My hardware design stops at
ttl really, but a single chip pdp clone in a gate array on an evaluation
board would be almost irresistible :-). I still have a Whitesmiths pc
dos to pdp C cross compiler somewhere, though I don't remember how good
the generated code was.

>
>> microcode, but by the 11/05, the boards were full of bipolar proms and
>> (iirc) Texas 74181 bit slice.
>
>
> Didn't they use AMD29xx?

No, definitely 74181. The 11/05 print set does have the microcode rom
listings btw, plus a cross reference table, so perhaps dec had some sort
of microcode assembler. If you have lots of time, you could relate these
to the schematics to reverse engineer the design.

>
>> The saying used to be about the pdp that any addressing mode works
>> with any instruction and / or register, so long as it sounds sensible
>> and it is more or less true as well. After programming various intel
>> and other micros in assembler, the pdp seemed like pure luxury :-)...
>
>
> From the 386 and on, the instruction set had become quite regular.
>
> -Peter

Some might say that the whole x86 architecture was and is regularly bad
:-)...

Chris

Peter "Firefly" Lund

Jan 3, 2007, 7:11:40 PM
On Wed, 3 Jan 2007, ChrisQuayle wrote:

> Ah, the power of indirection. Probably me take time to get back into the flow
> as well, but haven't worked on any cpu since that is so clean from an
> assembler programming point of view. Of course, modern compilers hide all
> that, but at the time, much more systems code was written in assembler.

You can write systems code in anything.

Think about the amount of Z80 and 6502/6510 machine code people wrote in
the eighties. Yes, a lot of it didn't even use assembler.

> If it's a good enough clone, you could even run (heaven forbid ?) all the old
> dec os's - rt11, rsx as well as several early unices.

I'm not going to implement the bank switching (sorry, "virtual memory"),
the dual address spaces, or the two or three different protection levels.

I might not even implement the simple stack-oriented floating-point
instructions.

(The PDP-11 was not so much an architecture as a family of architectures
with various more or less compatible extensions. For example, there were
at least three different kinds of floating-point acceleration.)

> Will you open source it to opencores.org, or what ?

Sure, why not... it will consist of a simple PCB layout and some
microcode. There might be a verilog simulation but it will be boring.

> My hardware design stops at ttl really, but a single chip pdp clone in a

I can handle TTL and other digital things just fine on paper but what
kills me is all the analog there really is in digital electronics.

In order to get my VAX built, I will have to get much more accustomed to
actual electronics, messy as it is.

That's actually why I'm going to do a PDP-11 along the way.

> No, definately 74181. The 11/05 print set does have the microcode rom
> listings btw + cross reference table, so perhaps dec had some sort of
> microcode assembler. If you have lots of time, you could relate these to the

They did. It seems to have used extreme amounts of macro substitution for
most of its work, at least when they had gotten to the VAX.

Look at the SimH pages on trailing edge (google for "supnik+simh").

> Some might say that the whole x86 architecture was and is regularly bad
> :-)...

No, it never was all that bad. It just looks a lot worse than it actually
is. Ok, the conditional branches had some stupid mnemonics - that
actually slowed me down a lot. The PDP-11 conditional branches had
marginally better names.
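For reference, a rough mapping of the mnemonics in question - illustrative only, pairing the x86 conditional jumps with their PDP-11 branch counterparts (unsigned vs. signed comparisons as in the two instruction sets):

```python
# x86 Jcc -> PDP-11 branch, same condition on the flags.
X86_TO_PDP11 = {
    "JE":  "BEQ",  "JNE": "BNE",    # equal / not equal
    "JB":  "BLO",  "JAE": "BHIS",   # unsigned: below / higher-or-same
    "JBE": "BLOS", "JA":  "BHI",    # unsigned: lower-or-same / higher
    "JL":  "BLT",  "JGE": "BGE",    # signed: less / greater-or-equal
    "JLE": "BLE",  "JG":  "BGT",    # signed: less-or-equal / greater
}
```

The PDP-11 names at least spell out lower/higher (unsigned) versus less/greater (signed), where x86 uses below/above versus less/greater.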

-Peter

ChrisQuayle

Jan 4, 2007, 10:29:32 AM
Peter "Firefly" Lund wrote:

> You can write systems code in anything.
>
> Think about the amount of Z80 and 6502/6510 machine code people wrote in
> the eighties. Yes, a lot of it didn't even use assembler.

Halcyon days? I remember programming manually in hex, 2-pass, with
branch targets entered on the second pass. Not sure that I would want to
return to those days though. High level languages may encourage less
thought about machine internals, but they do get the job done faster.

> I'm not going to implement the bank switching (sorry, "virtual memory"),
> the dual address spaces, or the two or three different protection levels.
>
> I might not even implement the simple stack-oriented floating-point
> instructions.
>

It does vary a lot, but a bare bones implementation would be quite
useful for embedded work.

>
> I can handle TTL and other digital things just fine on paper but what
> kills me is all the analog there really is in digital electronics.
>

I guess the analogue is in the transition time, but noise, ringing,
setup and hold times, delay, and race conditions are likely to be more
problematical. If you don't already have them, it may be worth looking
out some of the early 70's and 80's books on finite state machine
design, as well as classics like the AMD book on bit slice
microprocessor design - Mick and Brick, iirc. I never got to design a
bit slice machine, but some of the books were quite inspiring.

>
>
> No, it never was all that bad. It just looks a lot worse than it
> actually is. Ok, the conditional branches had some stupid mnemonics,
> that one actually slowed me down a lot. The PDP-11 conditional branches
> had marginally better names.
>
> -Peter

Here we differ - compared to other architectures, x86 was hard work at
both hardware design and assembler level. The cleanest from Intel was
the pre-x86 8080, but nearly everything since smells of camel and looks
overcomplicated. To be fair, part of this is the need to maintain
backwards compatibility, but once the pc arrived, there was just a small
window of opportunity to start again with a clean sheet of paper and
design for the future, yet they missed the boat completely. There were
other valid candidates at the time, quite a few competing 16/32 bit
micros in the 80's. Nat semi had the 16032 series, Zilog the Z8000, and
Texas had the micro'd version of the TI 990, but the 68k and its
descendants are the only real survivors still in volume production, that
is, not riding the pc wave. Why? At least partly because they look and
feel like a clean design - easy to program and design hardware for. If
you want to communicate ideas, make the languages and interfaces easy to
learn. The emphasis may have shifted away from bare metal, but someone
has to write the compilers. To me, elegant, clean design starts at the
foundations and is not something that can be glued on later at higher
levels...

Chris

Peter "Firefly" Lund

Jan 4, 2007, 2:57:23 PM
On Thu, 4 Jan 2007, ChrisQuayle wrote:

> Halcyon days ?.

No, not really. I'm just saying that people can write systems code in
anything. People did.

> I guess the analogue is in the transition time, but noise, ringing, setup and
> hold times, delay, race conditions are likely to be more problematical. If

Timing and race conditions are easy. Come on, race conditions, I mean,
really. Races are not hard, okay?

No, it's things like terminating resistors and decoupling capacitors that
I need to get comfortable with. And maybe parasitics. And making sure
to tie /every/ unused pin to GND or VCC through a resistor. And making
sure GND never bounces. And that VCC is always stable. Probably also
getting the reset right in combination with the voltage ramping at power
on.

> you don't already have them, it may be worth looking out some of the early
> 70's and 80's books on finite state machine design, as well as classics like

Finite state machines are not hard, as long as we can stay digital.

> Here we differ - compared to other architectures, x86 was hard work at both
> hardware design and assembler level. The cleanest from Intel was the pre x86
> 8080,

What?!?! How many different voltages did you need to supply to it? And
in what order did they have to be turned on and off? And you are telling
me that it was easier to interface to than the 8086/8088?

Sure, the 8086/8088 multiplexed address and data on the same pins, which
the 8080 didn't. A few latch chips are enough to take care of that.
That's /easy/.

Lack of built-in memory refresh is a bigger problem for a small machine of
that time.

The early 6502 Apple machines used the screen access to, as a side effect,
get the memory refreshed. That required rearranging the address bits to
the DRAM chips a bit but it was otherwise not expensive to build. As far
as I know, no other 6502-machine did it like that so it can't have been
too obvious. Some say it didn't work so well but what do I know, I never
had one.

Usually one had to build a special refresh circuit. The early PC did that
with a timer channel and DMA channel.

The Z80 CP/M machines and home computers avoided the problem entirely by
using a CPU with built-in memory refresh support.

The CBM-64 solved it by putting the refresh circuit into a corner of one
of its custom chips.
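The screen-refresh trick can be modelled with toy numbers (hypothetical timing; the classic 16K DRAMs wanted all 128 rows strobed roughly every 2 ms, and the sketch assumes one video fetch per microsecond):

```python
# Toy model: 128-row DRAM, 2000 us refresh window, one sequential video
# fetch per microsecond. If the DRAM row lines are wired to the LOW
# address bits (the Apple-style rearrangement), the scan hits every row.
ROWS, WINDOW_US = 128, 2000

def rows_touched(start, n_fetches, row_bits_from_low=True):
    """Which DRAM rows get strobed by n_fetches sequential reads from start."""
    touched = set()
    for t in range(n_fetches):
        addr = start + t
        # Low bits as row address vs. (naive) high bits as row address.
        row = addr % ROWS if row_bits_from_low else (addr // 256) % ROWS
        touched.add(row)
    return touched
```

With the low bits driving the rows, a 2000-fetch scan covers all 128 rows many times over; with high bits it crawls through only a handful, which is why the address-bit rearrangement mattered.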

Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
what I prefer?

It also had better 16-bit support than the 8080, which /also/ had the
somewhat stiff bindings between some registers and some instructions.

The 8086 had far, far better support for parameter passing and local
variables on the stack.

> the boat completely. There were other valid candidates at the time, quite a
> few competing 16/32 bit micros in the 80's. Nat semi had the 16032 series,
> Zilog Z8000, Texas had the micro'd version of the TI900, but the 68k and its

Look at how they expanded beyond 16 address bits. The 68K did it cleanly,
the 8086/8088 did it almost as well. The only problem was big arrays,
really, and the 8086/8088 mechanism was a lot cheaper than the 68K's PLUS
it was backwards-compatible. Pretty well done for an emergency project.

Intel's address extension technique for 8086/8088 was /so/ much better
than Zilog's for the Z8000. Zilog clearly can't have had much input from
anybody who actually programmed. Their scheme disappointed me when I
first read about it as a teenager. The NS16032/32016, on the other hand,
is a CPU I know very little about. It seems to have been slow and buggy
but otherwise nice.

I don't know enough about TMS9900 and TMS99000 to have an opinion other
than its design led to good interrupt response times and must have/would
have become a pain when CPUs got faster faster than RAM chips did (almost
all "registers" were really just a small workspace in RAM pointed to by a
real register). Actual memory use, such as array indexing, was a bit
slow, wasn't it? And 16-bits only?

> compilers. To me, elegant, clean design starts at the foundations and is not
> something that can be glued on later at higher levels...

Really?

Can't say I really agree. I think there is much to be said for the
incremental approach. Sometimes it produces something elegant, sometimes
it doesn't, but usually, it produces something that's useful.

-Peter

jacko

Jan 4, 2007, 3:50:52 PM

Peter "Firefly" Lund wrote:
> On Thu, 4 Jan 2007, ChrisQuayle wrote:
>
> > Halcyon days ?.
>
> No, not really. I'm just saying that people can write systems code in
> anything. People did.

and still will

> > I guess the analogue is in the transition time, but noise, ringing, setup and
> > hold times, delay, race conditions are likely to be more problematical. If
>
> Timing and race conditions are easy. Come on, race conditions, I mean,
> really. Races are not hard, okay?

i do wonder how the quartus II fpga compiler handles (or optionally
handles) this automatically - a grey code variable state transition
algorithm?
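The standard construction behind that idea, for reference (binary-reflected Gray code; a minimal sketch):

```python
def gray(i):
    """Binary-reflected Gray code: successive codes differ in exactly one bit."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")
```

Encoding sequential FSM states in Gray code means each state-to-next-state transition flips a single flip-flop, so outputs decoded from the state bits never see two bits mid-transition at once.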

> No, it's things like terminating resistors and decoupling capacitors that
> I need to get comfortable with. And maybe parasitics. And making sure
> to tie /every/ unused pin to GND or VCC through a resistor. And making
> sure GND never bounces. And that VCC is always stable. Probably also
> getting the reset right in combination with the voltage ramping at power
> on.

a pre-made development board would be a good prospect. let any analog
designer sort out their own board layout if needed.

> > you don't already have them, it may be worth looking out some of the early
> > 70's and 80's books on finite state machine design, as well as classics like
>
> Finite state machines are not hard, as long as we can stay digital.

the tools to automatically enter/draw these are expensive. and a
pipeline can add significant complexity. (must not forget the delays)

> > Here we differ - compared to other architectures, x86 was hard work at both
> > hardware design and assembler level. The cleanest from Intel was the pre x86
> > 8080,
>
> What?!?! How many different voltages did you need to supply to it? And
> in what order did they have to turned on and off? And you are telling me
> that it was easier to interface to it than the 8086/8088?

long live the simplified 68K

> Sure, the 8086/8088 multiplexed address and data on the same pins. , which
> the 8080 didn't. A few latch chips are enough to take care of that.
> That's /easy/.
>
> Lack of built-in memory refresh is a bigger problem for a small machine of
> that time.
>
> The early 6502 Apple machines used the screen access to, as a side effect,
> get the memory refreshed. That required rearranging the address bits to
> the DRAM chips a bit but it was otherwise not expensive to build. As far
> as I know, no other 6502-machine did it like that so it can't have been
> too obvious. Some say it didn't work so well but what do I know, I never
> had one.

nice idea.

> Usually one had to build a special refresh circuit. The early PC did that
> with a timer channel and DMA channel.
>
> The Z80 CP/M machines and home computers avoided the problem entirely by
> using a CPU with built-in memory refresh support.
>
> The CBM-64 solved it by putting the refresh circuit into a corner of one
> of its custom chips.
>
> Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
> the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
> what I prefer?

the segment register method?

> It also had better 16-bit support than the 8080, which /also/ had the
> somewhat stiff bindings between some registers and some instructions.

unavoidable with short opcodes.

> The 8086 had far, far better support for parameter passing and local
> variables on the stack.
>
> > the boat completely. There were other valid candidates at the time, quite a
> > few competing 16/32 bit micros in the 80's. Nat semi had the 16032 series,
> > Zilog Z8000, Texas had the micro'd version of the TI900, but the 68k and its
>
> Look at how they expanded beyond 16 address bits. The 68K did it cleanly,
> the 8086/8088 did it almost as well. The only problem was big arrays,
> really, and the 8086/8088 mechanism was a lot cheaper than the 68K's PLUS
> it was backwards-compatible. Pretty well done for an emergency project.
>
> Intel's address extension technique for 8086/8088 was /so/ much better
> than Zilog's for the Z8000. Zilog clearly can't have had much input from
> anybody who actually programmed. Their scheme disappointed me when I
> first read about it as a teenager. The NS16032/32016, on the other hand,
> is a CPU I know very little about. It seems to have been slow and buggy
> but otherwise nice.
>
> I don't know enough about TMS9900 and TMS99000 to have an opinion other
> than its design led to good interrupt response times and must have/would
> have become a pain when CPUs got faster faster than RAM chips did (almost
> all "registers" were really just a small workspace in RAM pointed to by a
> real register). Actual memory use, such as array indexing, was a bit
> slow, wasn't it? And 16-bits only?

cache eliminates this problem. a cache line as a work space? the 16 bit
"problem?" would suit the embedded world, and doubling all word sizes
would be a suitable 32 bit upgrade. the extra 16 bits of opcode space
could easily cover fpu and other subsections in a 32 bit processor. this
also works well for 64 bit expansion.

if you're thinking "overflow is not backwards compatible for branch
counting" then your assembly skills need some adaptation.

> > compilers. To me, elegant, clean design starts at the foundations and is not
> > something that can be glued on later at higher levels...
>
> Really?
>
> Can't say I really agree. I think there is much to be said for the
> incremental approach. Sometimes it produces something elegant, sometimes
> it doesn't, but usually, it produces something that's useful.

i agree. now all i need is a C compiler template where i can fill in
the generated code for xor, and, load and sum, so that all operations
are defined at a high level in terms of these, plus a section of code to
load and save variables from the stack.
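A sketch of that idea - other operations derived purely from xor, and, and sum (a 16-bit machine word is assumed; the helper names are made up):

```python
MASK = 0xFFFF  # assume a 16-bit machine word

def OR(a, b):
    # a^b and a&b set disjoint bit positions, so their sum is the union.
    return ((a ^ b) + (a & b)) & MASK

def NOT(a):
    # Complement = xor with all ones.
    return (a ^ MASK) & MASK

def SUB(a, b):
    # Two's-complement subtract built from sum and complement.
    return (a + NOT(b) + 1) & MASK
```

A code generator that only knows how to emit xor, and, load and sum can expand OR, NOT and SUB as above, at some cost in instruction count.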

cheers

http://indi.microfpga.com

Peter "Firefly" Lund

Jan 4, 2007, 4:41:23 PM
On Thu, 4 Jan 2007, jacko wrote:

>> No, not really. I'm just saying that people can write systems code in
>> anything. People did.
>
> and still will

Yes ;)

But only for fun, these days. The amount of assembler in a kernel or a
run-time library is small now.

> a development board pre made would be a good prospect. let any analog

That doesn't get me a pipelined VAX built from LSTTL chips.

>> Finite state machines are not hard, as long as we can stay digital.
>
> the tools to automatically enter/draw these are expensive.

No. Icarus verilog is free, Xilinx ISE is gratis. Switch/case statements
can be written in whatever. I can also get graphviz to draw the graphs
for me.
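A minimal sketch of that approach - the transition table as a plain dict, plus a dump to Graphviz DOT text that the `dot` tool can render (the state and event names are made up):

```python
# (state, event) -> next state; undefined events leave the state unchanged.
TRANSITIONS = {
    ("idle",  "start"): "run",
    ("run",   "halt"):  "idle",
    ("run",   "fault"): "error",
    ("error", "reset"): "idle",
}

def step(state, event):
    """Advance the FSM by one event."""
    return TRANSITIONS.get((state, event), state)

def to_dot(transitions):
    """Emit the FSM as Graphviz DOT source for rendering with `dot`."""
    lines = ["digraph fsm {"]
    for (src, ev), dst in transitions.items():
        lines.append(f'  "{src}" -> "{dst}" [label="{ev}"];')
    lines.append("}")
    return "\n".join(lines)
```

The same dict can feed a Verilog case-statement generator, and `to_dot` gives the state diagram for free.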

> pipeline significantly can add some complexity. (must not forget the
> delays)

Only if you want it to go really fast or you have loops in the
data/control flow -- or you actually want to build the electronics
yourself, in which case there be Analogue Monsters.

Or if you are size-constrained.

>>> Here we differ - compared to other architectures, x86 was hard work at both
>>> hardware design and assembler level. The cleanest from Intel was the pre x86
>>> 8080,
>>
>> What?!?! How many different voltages did you need to supply to it? And
>> in what order did they have to turned on and off? And you are telling me
>> that it was easier to interface to it than the 8086/8088?
>
> long live the simplyfied 68K

Which had an asynchronous protocol for memory and I/O access. Happily,
they didn't implement it fully, so one could apparently get away with
just grounding /DTACK instead of implementing Motorola's somewhat
complicated scheme. Of course, if you have a custom chip or enough PLAs,
then it doesn't matter.

Please google "DTACK grounded" and read the first few paragraphs of the
first newsletter.

>> Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
>> the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
>> what I prefer?
>
> the segment register method?

Yep. That meant you could have "large" (up to 64K) arrays begin (almost)
anywhere in memory without having to worry about segment crossing.
Also, the bank switches had to be programmed explicitly, outside of the
instructions that loaded from/stored to memory, whereas in the 8086 it
was implicit in the load/store (as one of the four segment registers).

The Z8000 seems to have gotten the implied segment thing right (it used
register pairs where the upper register contained the segment number) but
not the thing about segment crossings and array placements. Z8000
segments were entire non-overlapping blocks of 64K, numbered from 0 to
127.

8086/8088 got both right. The lower maximum memory size vs the Z8000 (1M
vs 8M) didn't matter nearly as much. The Z8000 saved an addition in the
address generation path but that was probably a bad decision.
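The two schemes side by side, as a sketch (address arithmetic per the 8086 and Z8000 architectures; the function names are made up):

```python
def phys_8086(segment, offset):
    """8086/8088: 20-bit address = segment*16 + offset; segments overlap,
    so a 64K object can start on any 16-byte boundary."""
    return ((segment << 4) + offset) & 0xFFFFF

def phys_z8000(segment, offset):
    """Z8000: 7-bit segment number concatenated with a 16-bit offset;
    segments are disjoint 64K blocks and an offset cannot carry into
    the segment number."""
    return ((segment & 0x7F) << 16) | (offset & 0xFFFF)
```

On the 8086 the addition lets arrays be placed almost anywhere; on the Z8000 the concatenation saves that adder but pins every object inside one fixed 64K block.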

>> It also had better 16-bit support than the 8080, which /also/ had the
>> somewhat stiff bindings between some registers and some instructions.
>
> un avoidable with short opcodes.

My point was that the 8086/8088 didn't introduce that stiffness; it was
already there in the 8080.


[TMS 9900, no "real" registers]

> cache eliminates this problem. a cache line as a work space?

I am not sure that would have been enough -- but neither you nor I have
studied the instruction set and its addressing modes well enough to say.

> the 16 bit "problem?" would suit embedded world, and doubling all word
> sizes would be a suitable 32 bit upgrade.

Would it? How much extra performance would it gain and how much ease of
addressability? At what cost in terms of wider data paths, wider buses,
extra transistors, lower instruction densities?

I don't know. I'm not so sure. Chris' lament was that there were other
CPUs which had broken the 16-bit addressing barrier and done it better
than the 8086/8088. As far as I can tell the TMS 9900 hadn't broken it
but maybe the TMS 99000 did? My counterpoint is that the 8086/8088
actually did it in a quite nice but vastly underappreciated way.

> if your thinking "overflow is not backwards compatable for branch
> counting" then your assembly skills need some adaptation.

My English reading comprehension skills apparently do.

-Peter

Del Cecchi

Jan 4, 2007, 7:40:34 PM

"Peter "Firefly" Lund" <fir...@diku.dk> wrote in message
news:Pine.LNX.4.61.07...@ask.diku.dk...

You are going to build a VAX out of LSTTL just like back in the day?

del


Eric P.

Jan 4, 2007, 9:54:46 PM
Peter "Firefly" Lund wrote:
>
> Which had an asynchronous protocol for memory and I/O access. Happilly,
> they didn't implement it fully, so one could apparently get away with just
> ground /DTACK instead of implementing Motorola's somewhat complicated
> scheme. Of course, if you have a custom chip or enough PLAs, then it
> doesn't matter.
>
> Please google "DTACK grounded" and read the first few paragraphs of the
> first newsletter.

Now it has been 30 years, but as far as I remember all microprocessor
bus protocols had some form of NOT_READY signal to handle slow devices.
If you know the device is fast enough you just ground the signal.
How is this different?

Eric

Eric Smith

Jan 5, 2007, 2:14:50 AM
"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:
> Does anybody know where I can find microcode listings for old Data
> General, IBM S/360, or Norsk Data machines?

Microcode listings for the System/360 and System/370 seem to be really
hard to find. Someone does have a 360/30 microcode listing, but
unfortunately is not willing to make it publicly available. If anyone
does have microcode for any of those, and is willing to make it
available, please speak up!

The widest architecturally compatible families of processors for which
microcode listings are readily available seem to be the PDP-10,
PDP-11, and VAX.

PDP-10: KL10 processor - many microcode versions available
KS10 processor - several versions available
Minnow processor (never entered production)

PDP-11: PDP-11/04
PDP-11/05
PDP-11/34
PDP-11/40
PDP-11/44
PDP-11/45
PDP-11/60
PDP-11/70
DCJ11 chip set (two chips on a ceramic hybrid)

VAX: VAX-11/780
MicroVAX II
CVAX
Rigel
Mariah
Raven
NVAX

> Microcode for the T-11 ("Tiny") PDP-11 microcontroller would also be
> greatly appreciated.

It would be interesting to see listings of the T11, F11, and 11/03
microcode, but no one seems to have them. The F11 CIS option
(Commercial Instruction Set, for Dibol) listings still exist, but the
person who has it does not have the base instruction set microcode.

The WCS development software for the 11/03 apparently included the
CIS microcode sources for that processor.

I've considered trying to extract the microcode from the 11/03 and
F11 chip sets. For the 11/03, it should be easy to dump the contents
of the MICROM chips, but there are also two PLAs in the control
chip that can't be easily extracted and affect microinstruction
sequencing, so the MICROM contents alone would be nearly worthless.
The best approach to extracting the PLAs is probably to photomicrograph
the die. I don't currently know of anywhere that I can get a high-
resolution photomicrograph on a hobby (<$500) budget.

For the F11 chip set, things are more complicated, because the microcode
is distributed among multiple control chips, each of which has its own
microsequencer. It should be possible to extract the portions of
microcode that are transferred on the bus to the data path chip, but
the parts of the microcode word used for microsequencer control are
not available externally to the control chip. Again, a photomicrograph
may be the best way to dump them.

Peter Monta has provided an existence proof of the feasibility of optically
dumping microcode from 1970s-era chips with masked ROM:

http://www.pmonta.com/calculators/hp-35/index.html

He dumped the HP-35 calculator ROMs optically, and got them running
on a simulator that I'd previously written for the HP-45 and HP-55.

Eric Smith

Jan 5, 2007, 2:43:07 AM
ChrisQuayle <nos...@devnul.co.uk> writes:
> Yes it was, but probably the best that could be done at the time in
> commodity devices - ok, bit slice in intent ?. I think the more
> complex AMD 29xx series came quite a bit later than 1972.

Sure, but there were bit slice components before the Am2900 series.
The earliest I'm aware of were from Fairchild in 1968-1969.

Year Vendor P/N description
---- --------- ---- ---------------------------
1968 Fairchild 3800 8-bit data path
1968 Fairchild 3804 4-bit data path
1972 National MM5750 4-bit data path
1972 National MM5751 sequencer and microcode ROM
1974 MMI 6701 4-bit data path, very similar to later Am2901
1974 MMI 6700 4-bit sequencer
1975 Intel 3002 2-bit data path
1975 Intel 3001 9-bit sequencer
1975 AMD 2901 4-bit data path
1975 AMD 2909 4-bit sequencer
1975 AMD 2911 4-bit sequencer
1976 TI 74S481 4-bit data path
1976 TI 74S482 sequencer
1977 AMD 2903 4-bit data path
1977 AMD 2910 10-bit sequencer
1977 Motorola 10800 4-bit data path, ECL
1977 Motorola 10801 sequencer, ECL

? MMI 67110 sequencer
? AMD 29203 4-bit data path
? TI 74AS888 8-bit data path
? TI 74AS890 sequencer
? TI SBP0400 4-bit data path, I2L
? AMD 29116 16-bit data path

I'm missing information on the Fairchild Macrologic bitslice parts,
which were available in both TTL and CMOS. There are probably
some others I'm not aware of.

I've seen conflicting reports over whether the Four-Phase AL1 (1970)
should be considered a bit slice design. I need to track down a copy of
the paper "Four-phase LSI logic offers new approach to computer
designer" by L. Boysel and J. Murphy from the April 1970 issue
of Computer Design.

Peter "Firefly" Lund

Jan 5, 2007, 7:49:09 AM
On Thu, 4 Jan 2007, Del Cecchi wrote:

> You are going to build a VAX out of LSTTL just like back in the day?

Yes.

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 7:52:30 AM
On Thu, 4 Jan 2007, Eric P. wrote:

> Now it has been 30 years, but as far as I remember all microprocessor
> bus protocols had some form of NOT_READY signal to handle slow devices.
> If you know the device is fast enough you just ground the signal.
> How is this different?

Please google.

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 8:07:02 AM
On Fri, 4 Jan 2007, Eric Smith wrote:

> Microcode listings for the System/360 and System/370 seem to be really
> hard to find. Someone does have a 360/30 microcode listing, but
> unfortunately is not willing to make it publicly available.

:(


> DCJ11 chip set (two chips on a ceramic hybrid)
>
> VAX: VAX-11/780
> MicroVAX II
> CVAX
> Rigel
> Mariah
> Raven
> NVAX

I have these, except for VAX-11/780.

> of the MICROM chips, but there are also two PLAs in the control
> chip that can't be easily extracted and affect microinstruction
> sequencing, so the MICROM contents alone would be nearly worthless.

PLAs should not be too hard to extract, as long as there's no storage
involved. Oh, you mean they are embedded inside some other circuitry?

> The best approach to extracting the PLAs is probably to photomicrograph
> the die. I don't currently know of anywhere that I can get a high-
> resolution photomicrograph on a hobby (<$500) budget.

I think a cheap microscope and a camera are good enough. Perhaps just a
camera with a "macro" setting, or a camera plus a magnifying glass.

-Peter

ChrisQuayle

Jan 5, 2007, 9:25:31 AM
Peter "Firefly" Lund wrote:

>
> Timing and race conditions are easy. Come on, race conditions, I mean,
> really. Races are not hard, okay?

It may be easy for half a dozen ttl devices, but by the time you have
hundreds of devices, you will need to be a very competent designer to
make it work reliably over component spreads and temperature. Building a
vax in ssi ttl would be a herculean task - even the 11/05 was a two
board hex width unibus set with hundreds of ssi ttl devices. It may be
better to start with a gate array, where you might have some chance of
simulating the design. If ttl, how will you wire it all up? Hint: I
still use wire wrap for quick "what if" style prototyping, ancient as it
is, because it's fast, easy to make changes and allows good control over
component placement, wire length etc. Prestripped wires and an electric
ww tool make it a snap. You will also need a logic analyser and scope.
Nothing too expensive - a 100 MHz or better Tek *analogue* scope and a
50 MHz hp analyser should be enough spec to get started. If you can find
a cheap digital pattern generator as well, so much the better.


>> Here we differ - compared to other architectures, x86 was hard work at
>> both hardware design and assembler level. The cleanest from Intel was
>> the pre x86 8080,
>
>
> What?!?! How many different voltages did you need to supply to it? And
> in what order did they have to be turned on and off? And you are telling
> me that it was easier to interface to it than the 8086/8088?

I think we are slightly at cross purposes - the comparison was between
x86 and 68k, both from the same sort of timeframe. Care to comment again?

> I don't know enough about TMS9900 and TMS99000 to have an opinion other
> than its design led to good interrupt response times and must have/would
> have become a pain when CPUs got faster faster than RAM chips did
> (almost all "registers" were really just a small workspace in RAM
> pointed to by a real register). Actual memory use, such as array
> indexing, was a bit slow, wasn't it? And 16-bits only?
>

Have never used them, but the ti series minis and later micros were
quite elegant in concept. The large register set (128, iirc) was off
cpu in main memory, with just an on cpu register pointer. The main
advantage was fast real time context switching by just changing the
register pointer to point to a different area of memory - thus no
multiple register saves. The obvious disadvantage is that you have to go
out to main memory for every register access, but at the time,
instruction cycle time was probably comparable to memory access time, so
perhaps a reasonable compromise in terms of overall system performance.
I think the last of the series put ram on chip to speed register access,
but by then it was probably too late, except for some specialised
markets like aerospace, where the micro version was, I think, one of the
first micros to be rad-hard approved.
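
A toy sketch of that workspace-pointer scheme (all names and addresses
invented for illustration; the real 9900's BLWP also stashes the old
WP/PC/status in the new workspace, which is omitted here):

```python
# Toy model of the TMS 9900 workspace-pointer idea: "registers"
# R0..R15 are just 16 words of main memory starting at the workspace
# pointer (WP), so a context switch is one pointer write, not sixteen
# register saves.

MEM = [0] * 4096  # flat word-addressed "main memory"

class CPU9900:
    def __init__(self, wp):
        self.wp = wp  # the only real register pointer on the chip

    def read_reg(self, n):
        # every register access goes out to main memory
        return MEM[self.wp + n]

    def write_reg(self, n, value):
        MEM[self.wp + n] = value

    def context_switch(self, new_wp):
        # repoint WP at a different 16-word workspace; task A's
        # registers stay untouched in memory
        self.wp = new_wp

cpu = CPU9900(wp=0x100)
cpu.write_reg(0, 42)        # task A's R0
cpu.context_switch(0x200)   # enter task B: no register saves needed
cpu.write_reg(0, 99)        # task B's R0
cpu.context_switch(0x100)   # back to task A
print(cpu.read_reg(0))      # task A's R0 is still intact -> 42
```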

>
>> compilers. To me, elegant, clean design starts at the foundations and
>> is not something that can be glued on later at higher levels...
>
>
> Really?
>
> Can't say I really agree. I think there is much to be said for the
> incremental approach. Sometimes it produces something elegant,
> sometimes it doesn't, but usually, it produces something that's useful.
>
> -Peter

Ok, but are we talking here about architectural elegance, or something
that sells? The two are often not synonymous, though mutually
exclusive might be a better choice of words :-)...

Chris

Peter "Firefly" Lund

Jan 5, 2007, 9:50:43 AM
On Fri, 5 Jan 2007, ChrisQuayle wrote:

>> Timing and race conditions are easy. Come on, race conditions, I mean,
>> really. Races are not hard, okay?
>
> It may be easy for half a dozen ttl devices, but by the time you have
> hundreds of devices, you will need to be a very competent designer to make it
> work reliably over component spreads and temperature. Building a vax in ssi

I think the trick is to make the signal flow simple.
Like so many other things, it is mostly about complexity management.

My microarchitecture is pipelined and has very few cycles in it where
signals flow "backwards".

I fully /expect/ to get in trouble, but only with the analog side of
things.

> ttl would be herculean task, but even the 11/05 was a two board hex width
> unibus set with hundreds of ssi ttl devices.

I expect 5-7 PCBs for the VAX, one per pipeline stage. I expect the ALU
stage to use about 40 chips for a 32-bit wide data path. Yes, shifts and
floating-point will be slow. I don't care.

I am not going to implement a proper cache at first and I am not going to
implement my memory with old, small chips. Instead, I'm going to
interface to an old SIMM or DIMM PC memory stick.

The pipeline will still have to stall when the memory isn't ready so I
should be able to later implement a real cache with a memory slowed down
to speeds that were realistic in 1984.

> It may be better to start with a
> gate array, where you might have some chance of simulating the design.

Actually, I should be able to simulate it just fine with either verilog or
SimSYS or both.

I should even be able to statically verify parts of it (as in: with
proofs).

> If ttl, how will you wire it all up ?. Hint: I still use wire wrap for quick
> "what if" style prototyping, ancient as it is, because it's fast, easy to
> make changes and allows good control over component placement, wire length
> etc. Prestripped wires and an electric ww tool make it a snap.

I expect to etch photosensitive copper boards and then solder components
(or sockets) in.

As far as I can tell it is faster and much less error prone than
wire-wrapping.

Btw: I thought only Americans were afraid of soldering?

> You will also need a logic analyser and scope. Nothing too expensive,
> 100Mhx or better Tek *analogue* scope and 50 Mhz hp analyser should be

I bought a nice Tektronix 502 for the purpose. It's an old analog thing
with tubes from the late fifties with dual beams and sensitivity down to
the microvolt range. It only claims a bandwidth of 1MHz but presumably it
will go higher if I don't need too fine a sensitivity. It cost 300 DKr
(slightly less than 40 Euro).

As for the logic analyzer, well, I don't intend to jump straight to the
VAX pipeline, anyway, so one of the training projects is a small 8-channel
"logic analyzer" that samples at 40 MHz to a RAM chip which can later be
read out. It will require an 8051, a TTL counter chip, and a couple of
RAM chips to do interleaved writing to. I expect it to be at most 5cm x
10cm in size. It would be a good way to learn more about signal integrity
issues before I make any really big mistakes.
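
The interleaving trick can be sketched in a few lines (a pure software
model, nothing 8051-specific): alternate samples go to two banks, so
each RAM chip only needs half the 40 MHz write rate:

```python
# Software model of interleaved capture: even samples to bank A,
# odd samples to bank B, so each bank runs at half the sample rate.

def capture(samples):
    bank_a = samples[0::2]  # what the first RAM chip would latch
    bank_b = samples[1::2]  # what the second RAM chip would latch
    return bank_a, bank_b

def read_out(bank_a, bank_b):
    # slow readout side: re-interleave the two banks
    out = []
    for pair in zip(bank_a, bank_b):
        out.extend(pair)
    return out

stream = [0xA5, 0x5A, 0xFF, 0x00, 0x12, 0x34]
a, b = capture(stream)
assert read_out(a, b) == stream  # round-trips for an even-length capture
```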

> enough spec to get started. If you can find a cheap digital pattern
> generator as well, so much the better.

Shouldn't be necessary with an 8051 and some TTL ;)

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 10:11:45 AM
On Fri, 5 Jan 2007, ChrisQuayle wrote:

> I think we are slightly at cross purposes - the comparison was between x86
> and 68k, both the same sort of timeframe. Care to recomment ?.

Sure. First of all, they weren't /really/ in the same time frame.
The 8086/8088 got there years before the 68K.

That tends to matter.

Secondly, the 8088 could easily use existing peripheral chips designed for
the 8080/8085/Z80.

That also tends to matter.

As far as I can tell, the 8086/8088 was also easier to interface with than
the 68000/68008. I'm referring both to how you hook up memory and I/O
devices, how interrupts are handled, and how the initial bootup is
handled.

That also tends to matter ;) -- but since I'm not nearly as sure about
this point I won't belabor it. I might be wrong here.

As for the programming model, I much prefer the 68000 over the 8086/8088,
if we ignore compatibility issues (which tend to matter). Even if the
68000 had those annoying alignment issues (which tend to matter).

Notice I wrote "68000" and "8086/8088".

You could consider the partitioning of registers into address registers
and data registers a mistake (I won't but some do).

By the time of the 386, the "stiffness" in the allowed register use was
almost entirely gone (at a decoding cost). I consider this a case of
adding stuff to gain elegance ;)

When you introduce MMUs and floating-point, I think the x86 wins. Yes, I
think the 286 was mostly a mistake but by the 386 those mistakes were
mostly fixed and the clumsy segmentation model could be mostly ignored.

I also think the later models of the x86 handled self-modifying code
better than the later models of the 68K. That's actually important,
sometimes.

Some of the 68K models used different exception stack frame formats than
others, without having a compatibility flag.

The x86 didn't make that mistake.

I think I could dig up both more points like these and more details but I
think I'll stop here :)

>> I don't know enough about TMS9900 and TMS99000 to have an opinion other
>> than its design led to good interrupt response times and must have/would
>> have become a pain when CPUs got faster faster than RAM chips did (almost
>> all "registers" were really just a small workspace in RAM pointed to by a
>> real register). Actual memory use, such as array indexing, was a bit
>> slow, wasn't it? And 16-bits only?
>>
>
> Have never used them, but the ti series minis and later micros were quite
> elegant in concept. The large registers set (128, iirc) was off cpu in main

I think it was only 16 for the TMS 9900.

> memory with just an on cpu register pointer. The main advantage being fast
> real time context switching by just changing the register pointer to point to
> a different area of memory. Thus no multiple register saves. The obvious
> disadvantage is that you have to go out to main memory for every register
> access, but at the time, instruction cycle time was probably comparable to
> memory access time, so perhaps a reasonable compromise in terms of overall
> system performance.

I do think it was a good idea at the time.

> Ok, but are we talking here about architectural elegance, or something that
> sells ?. The two are often not synonymous, though mutually exclusive might be
> a better choice of words :-)...

I think something that sells almost always has what could be called
"elegance" but only if the entire system is considered. What is usually
called "elegant" is only elegant if one only considers a small part of the
whole. One might consider the ARM's approach to constants elegant (emit
instructions to generate them) but when one has to define an object file
format and linkers/loaders/assemblers/compilers to handle that, the
perceived elegance tends to diminish ;)

-Peter

drhowarddrfine

Jan 5, 2007, 10:11:55 AM
I've threatened to do something similar for many years. Cool. Good
luck to you.

Peter "Firefly" Lund

Jan 5, 2007, 10:23:42 AM
On Fri, 5 Jan 2007, drhowarddrfine wrote:

> I've threatened to do something similar for many years. Cool. Good luck to
> you.

Thank you :)

What does your microarchitecture look like?

-Peter

Eric P.

Jan 5, 2007, 11:26:49 AM

I did. The first few paragraphs of the newsletter you suggested say
http://linux.monroeccc.edu/~paulrsm/dg/dg01.htm
"The 68000, when writing data, places the data on the bus and
waits for an acknowledgement that the data has been received.
This is accomplished by placing a low logic level on pin 10,
DTACK (DaTa ACKnowledged). A read is performed by requesting data
from a device and waiting until a low logic level is placed on
pin 10 which acknowledges that the data is now available on the bus."

The remainder goes on about BERR (bus error) signal, which is
a different issue than DTACK and the 'asynchronous protocol'.
(while these may interact, that is a different issue as address
or data parity errors can also trigger BERR if desired).

As far as I can see, this 'asynchronous protocol' (their words)
is the same handshake as all other bus protocols.

(I used the quotes because usually the DTACK or NOT_READY or whatever
signal is sampled by the cpu on each clock edge after the transaction
is begun. So strictly speaking it is not 'asynchronous').

Eric

Torben Ægidius Mogensen

Jan 5, 2007, 11:30:20 AM
"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:

> As for the programming model, I much prefer the 68000 over the
> 8086/8088, if we ignore compatibility issues (which tend to matter).
> Even if the 68000 had those annoying alignment issues (which tend to
> matter).
>
> Notice I wrote "68000" and "8086/8088".
>
> You could consider the partitioning of registers into address
> registers and data registers a mistake (I won't but some do).

I won't either. There are several advantages in splitting the
register file:

- You can have twice as many registers with the same number of
instruction bits, as which set you use will be implicitly given by
the opcode. This is somewhat offset by the need to duplicate some
instructions for both types of registers, though.

- Without multi-porting the register file(s), you can read/write two
registers at the same time (as long as they are of different kinds).
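
A sketch of the first point, with an invented encoding: a 3-bit
register field reaches all 16 of the 68000's registers because the bank
(data vs address) is implied by the opcode:

```python
# Sketch of the encoding win from a split register file (encoding
# invented for illustration): a 3-bit field names 8 registers, and
# the opcode supplies the bank bit, covering 16 registers total.

DATA, ADDR = 0, 1

def decode_reg(opcode_bank, reg_field):
    # 3 bits select among 8 registers; the bank comes from the opcode
    assert 0 <= reg_field < 8
    prefix = "D" if opcode_bank == DATA else "A"
    return "%s%d" % (prefix, reg_field)

# 3-bit field + implicit bank reaches all 16 registers:
names = {decode_reg(b, r) for b in (DATA, ADDR) for r in range(8)}
print(len(names))  # -> 16
```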

Torben

Eric P.

Jan 5, 2007, 12:00:21 PM

Ok I see what they are griping about. It has to do with
non-existent devices and the signal sense.

If the addressed device/ram is missing, a protocol that requires
an ack to proceed (DTACK) will hang the bus whereas a protocol that
requires an ack to hold (NOT_READY) will not hang.
In the former case the hang _must_ be dealt with in every design;
in the latter case you read junk data but the cpu doesn't hang.
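
A toy model of the two signal senses (grossly simplified; a real cpu
samples the line on each clock edge, and real designs add a watchdog):

```python
# Toy model of the two ack senses with a missing device.

def read_cycle_dtack(device_present, timeout_cycles=16):
    # DTACK sense: the cycle completes only when something asserts
    # the ack.  A missing device never acks, so the bus wedges
    # (until a watchdog timer forces a bus error).
    for _ in range(timeout_cycles):
        if device_present:
            return "data"
    return "hung"

def read_cycle_not_ready(device_present):
    # NOT_READY sense: the cycle completes unless something holds
    # the wait line.  A missing device holds nothing, so the cpu
    # finishes the cycle and latches junk off the undriven bus.
    return "data" if device_present else "junk"

print(read_cycle_dtack(False))      # a missing device hangs the bus
print(read_cycle_not_ready(False))  # a missing device yields junk data
```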

Eric

drhowarddrfine

Jan 5, 2007, 12:05:58 PM
I never carried it through. I was putting together a bit-slice system
using ttl and, later, 2901s.

I did a lot of work with the 68000 with some computers I designed from
scratch for work, back in the day when PLAs were fairly new.

Peter "Firefly" Lund

Jan 5, 2007, 12:26:33 PM
On Fri, 5 Jan 2007, drhowarddrfine wrote:

> I never carried it through. I was putting together a bit-slice system using
> ttl and, later, 2901s.

But what was your planned VAX microarchitecture going to look like?

My intention is to prove that the VAX could be pipelined and could be made
fast -- and maybe try out a few microarchitectural tricks along the way.

My thesis is that the VAX could have been made fast relatively easily and
that the Alpha, however much we loved it, was a mistake.

A secondary point is that the VAX family could have been more
"self-compatible", i.e. that it wasn't necessary to remove instructions to
the degree it was done. I am not so sure about this point, though.

-Peter

Eric P.

Jan 5, 2007, 12:34:17 PM
"Eric P." wrote:
>
> Ok I see what they are griping about. It has to do with
> non-existent devices and the signal sense.
>
> If the addressed device/ram is missing, a protocol that requires
> an ack to proceed (DTACK) will hang the bus whereas a protocol that
> requires an ack to hold (NOT_READY) will not hang.
> The former case _must_ be dealt with in every design,
> the later case you read junk data but the cpu doesn't hang.
>
> Eric

Hmmm... I looked at some of the above page again, and after a bit
more thought this looks like just a marketing blurb to sell their
brand of 68000 board, and a red herring issue.

My statement about which case must be dealt with was in error.
In both cases you must ensure the signal is in a proper state when a
non-existent device is addressed and not allowed to float tri-state
to arbitrary logic levels. IIRC the solution was to tie the line to
either +5 (DTACK) or ground (NOT_READY) through a 100K resistor.
No timers were required to prevent the cpu hanging - only if you
wanted a trap generated for bad addresses.

Eric

drhowarddrfine

Jan 5, 2007, 12:39:16 PM
I'm sorry I confused you. I did not try to recreate the VAX but was
doing an architecture of my own, specific to an application.

drhowarddrfine

Jan 5, 2007, 12:43:30 PM

Yes, that is correct. We would have pullups/downs because tri-state was
not allowed. The value depended on the length of the bus and impedance.
I want to say the value was more like 1Meg but 100K sounds right, too.
I just don't remember.

iirc, I had to incorporate an outboard timer to timeout invalid accesses.

ChrisQuayle

Jan 5, 2007, 1:08:17 PM
Peter "Firefly" Lund wrote:

>
> Sure. First of all, they weren't /really/ in same time frame.
> The 8086/8088 got there years before the 68K.
>
> That tends to matter.

I wasn't quite sure about some of the dates, so googling for a timeline gets:

8080 and 6800 : 1974
6502: 1975

8086: 1978
68k: 1979

So not much time slip between them.

>
> Secondly, the 8088 could easily use existing peripheral chips designed
> for the 8080/8085/Z80.

So could the 68k, which had three pins - enable (E), VMA and VPA -
dedicated to legacy 6800 series peripheral control.

> As far as I can tell, the 8086/8088 was also easier to interface with
> than the 68000/68008. I'm referring both to how you hook up memory and
> I/O devices, how interrupts are handled, and how the initial bootup is
> handled.
>
> That also tends to matter ;) -- but since I'm not nearly as sure about
> this point I won't belabor it. I might be wrong here.

I guess it depends on how much work you have done with a given cpu, as
there is always a learning curve. My x86 hw design experience is
limited, but around '83 I designed memory cards using the then new
64k x 1 bit devices - a 128k card for the 6502 Apple II and a 384k card
for the 8086 Sirius (II ?). Still have the Intel iAPX86 users manual
somewhere, but remember it being hard work to build any simple mental
model for what was required for the design. There were pages that
seemed to contradict themselves in terms of timing or other info and,
if not impenetrable, it was quite dense. Bought a 68k eval board later
and it seemed like an orders-of-magnitude improvement over anything
seen previously. Generous register set and addressing modes, 24 bit
flat address space, asynchronous bus to allow a mixture of memory and
peripheral access times, multimode, fully vectored interrupts - the
list goes on and on. At the time, the architecture was like a wish list
in silicon and, if not perfect, quite revolutionary. I think what I'm
trying to say is that the 68k was one of the first micros to break the
old mini-style mould of accumulator + index register architecture that
was so pervasive in early micro designs.

>
> As for the programming model, I much prefer the 68000 over the
> 8086/8088, if we ignore compatibility issues (which tend to matter).
> Even if the 68000 had those annoying alignment issues (which tend to
> matter).
>
> Notice I wrote "68000" and "8086/8088".
>

Agreed, a delight to program at assembler level.

> Some of the 68K models used different exception stack frame formats than
> others, without having a compatibility flag.

But that doesn't matter, as 68k exceptions are fully vectored, so you
always know what generated the interrupt and shouldn't need to poll
registers or poke around in the entrails. Or am I missing something?
Ok, perhaps for debugging.

Last time I looked, even the arm cores, current darlings of the embedded
telephony and handheld device market, only had a basic 2 level interrupt
structure. Should imagine that's quite helpful for real time
multi-interrupting device work ;)...

Chris


ChrisQuayle

Jan 5, 2007, 1:13:25 PM
drhowarddrfine wrote:

>
> Yes, that is correct. We would have pullups/downs because tri-state was
> not allowed. The value depended on the length of the bus and impedance.
> I want to say the value was more like 1Meg but 100K sounds right, too.
> I just don't remember.
>
> iirc, I had to incorporate an outboard timer to timeout invalid accesses.

I think the major difference is between a synchronous bus and an
asynchronous bus. In the first instance, the cpu just assumes that the
peripheral is always ready to do a transfer and strobes the data
blindly, with no ack from the external device. An asynchronous bus also
strobes the data, but then waits to get an acknowledge from the device
logic once the transfer is complete.

The second model is more robust and is easier to recover from, as you
can set a timer to generate an exception on a failed ack, but the
synchronous model might be faster overall...

Chris

drhowarddrfine

Jan 5, 2007, 1:55:02 PM
ChrisQuayle wrote:

>
> The first model is more robust and is easier to recover from, as you can
> set a timer to generate an exception on a failed ack, but the
> synchronous model might be faster overall...
>
> Chris
>

I believe this is correct

drhowarddrfine

Jan 5, 2007, 2:01:27 PM
ChrisQuayle wrote:
>
>> As far as I can tell, the 8086/8088 was also easier to interface with
>> than the 68000/68008. I'm referring both to how you hook up memory
>> and I/O devices, how interrupts are handled, and how the initial
>> bootup is handled.
>>
>> That also tends to matter ;) -- but since I'm not nearly as sure about
>> this point I won't belabor it. I might be wrong here.
Again, iirc, and it's been a long time, I seem to remember it being much
easier to design with the 68K. The 8086/88 needed a number of outboard
chips and there was some bizarre timing involved but the 68k did not
need outboard chips.

My memory may fail me here because I so soundly rejected the 8086 on
that and the programming model, as mentioned below, that I never paid it
any attention since.

Peter "Firefly" Lund

Jan 5, 2007, 3:56:39 PM
On Fri, 5 Jan 2007, ChrisQuayle wrote:

>> Some of the 68K models used different exception stack frame formats than
>> others, without having a compatibility flag.
>
> But that doesn't matter, as 68k exceptions are fully vectored, so you always

It certainly does. It means you have to upgrade your operating
system, or install a funny "extension" that does the job on behalf of
the operating system, if you want to use the newer CPU.

> Last time I looked, even the arm cores, current darlings of the embedded
> telephony and handheld device market, only had a basic 2 level interrupt
> structure. Should imagine quite helpfull for real time multi interrupting
> device work ^)...

I don't think having many prioritized interrupt levels really matters all
that much. Vectors probably do but not priorities.

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 4:00:31 PM
On Fri, 5 Jan 2007, Eric P. wrote:

> Hmmm... I looked at some of the above page again, and after a bit
> more thought this this looks like just a marketing burb to sell
> their brand of 68000 board and a red herring issue.

Marketing, yes. Red herring? I don't believe so.

To give you a better answer I'll have to dig around a bit -- the six 68K
data books I have do not cover the original 68K very well.

Give me a timeout of about three days :)

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 4:46:06 PM
On Fri, 5 Jan 2007, ChrisQuayle wrote:

>> Sure. First of all, they weren't /really/ in same time frame.
>> The 8086/8088 got there years before the 68K.
>>
>> That tends to matter.
>
> I wasn't quite sure about some of the dates, so google for a timeline gets:
>
> 8080 and 6800 : 1974
> 6502: 1975
>
> 8086: 1978
> 68k: 1979
>
> So not much time slip between them.

As far as I know it was a bit worse than that.

The 68K was introduced in September, 1979 and the 8086 on June 8, 1978.
I haven't been able to dig up any info on early availability of the 8086
but the 68K seems to have had problems.

This is an early engineering sample of the 68000, serial #807,
manufactured in October '79:
http://www.cpu-world.com/CPUs/68000/Motorola-XC68000L%20(SN807).html

That was the first, ceramic version.

As far as I know, it took them ages to get to a plastic packaged version
(which was cheaper) and to get a version out with an 8-bit bus (which was
also cheaper). And when they did get out the 68008, they initially sold
it at the same price as the 68000.

-Peter

Peter "Firefly" Lund

Jan 5, 2007, 5:18:28 PM
On Fri, 5 Jan 2007, ChrisQuayle wrote:

> I guess it depends on how much work you have done with a given cpu, as there

Yes, familiarity matters, too :)

> the 6502 Apple II and a 384k card for the 8086 Sirius (II ?). Still have the
> Intel iAPX86 users manual somewhere, but remember it being hard work to build
> any simple mental model for what was required for the design.

Why?

It should have been easier than the bank switching the 6502 forced you
into?!?

> revolutionary. I think what i'm trying to say is that the 68k was one of the
> first micros to break the old mold mini model of accumulator + index register
> architecture that was so pervasive in early micro designs.

Yep. I like the 68000's addressing modes more than the 8086/8088's but
I think the 386 (and 68020) had better addressing modes than both.

The 386 could add together the value of an immediate displacement
(8/16/32-bits with sign-extension), the value of a register, and then the
value of another register scaled by 1/2/4/8 and use the result as an
address. I know the 68020 got the same scaling capability a year earlier
:) -- I'm just saying that sometimes elegance can be achieved by adding
something.
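
That 386 addressing mode can be written out in one line (register
values here are invented for the example):

```python
# Sketch of the 386 effective-address computation described above:
# EA = sign-extended displacement + base register + index register
# scaled by 1, 2, 4 or 8, wrapped to 32 bits.

def effective_address(disp, base, index, scale):
    assert scale in (1, 2, 4, 8)
    # Python ints are signed, so a negative disp models sign-extension;
    # the mask models the 32-bit address wrap.
    return (disp + base + index * scale) & 0xFFFFFFFF

# e.g. mov eax, [ebx + esi*4 + 16] with ebx = 0x1000, esi = 3:
print(hex(effective_address(16, 0x1000, 3, 4)))  # -> 0x101c
```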

I don't think the pre-decrement/post-increment stuff was terribly
important, though, and I've hardly missed it on other CPUs. Well,
except for the 8051 - autoincrement and a second data pointer register
pair would have been nice there.
-Peter

ken...@cix.compulink.co.uk

Jan 5, 2007, 5:43:43 PM
In article
<Pine.LNX.4.61.07...@ask.diku.dk>,
fir...@diku.dk ( Lund) wrote:

> Some say it didn't work so well but what do I know, I
> never had one.

I used to know people who had programmed for one,
apparently it worked technically but the memory layout was
a real pain for programmers. Video memory was split up into
256 byte chunks scattered all over the address space IIRC

> I don't know enough about TMS9900 and TMS99000 to have
> an opinion other than its design led to good interrupt
> response times

Never used any machine based on those chips but reports
are that it was slower than contemporary 8 bit machines.
The later chip had 256 bytes of on-chip memory for register
space. Addressing was IIRC limited to 16 bits anyway.

Of course the main limit for people who were paying
for home machines was price, not performance. One of the
main cost items in the 1980s was RAM. From memory, RAM then
cost about twice as much per kilobyte as it does per
megabyte now, and the difference could well have been higher.
I remember paying about 380 pounds sterling for a Video Genie:
16K of memory, a cassette tape recorder for external storage,
and you had to use a TV for the display.

Ken Young

Eric Smith

Jan 6, 2007, 5:34:16 PM
"Peter \"Firefly\" Lund" <fir...@diku.dk> writes:
> My intention is to prove that the VAX could be pipelined and could be
> made fast

In what timeframe? It was pipelined and made fast in its later years.
But it was getting harder to squeeze out more performance. DEC could
probably have kept going down that path if they had wanted to (and been
able to) spend as much money on it as Intel spent developing the
Pentium and followons. Instead, they reasoned (correctly, IMHO) that
they could get better raw performance by building a "speed demon"
style RISC processor. And that it would be easier to take that
architecture down the "braniac" path later than it would be to do
so to the VAX.

> My thesis is that the VAX could have been made fast relatively easily
> and that the Alpha, however much we loved it, was a mistake.

It might be "relatively easy" today to take a late-1980s VAX design and
make it faster, but would it really have been "relatively easy" to do
that in 1989? I'm skeptical.

Eric

Eric Smith

Jan 6, 2007, 5:38:50 PM
I wrote:
> of the MICROM chips, but there are also two PLAs in the control
> chip that can't be easily extracted and affect microinstruction
> sequencing, so the MICROM contents alone would be nearly worthless.

"Peter \"Firefly\" Lund" wrote:
> PLAs should not be too hard to extract, as long as there's no storage
> involved. Oh, you mean they are embedded inside some other circuitry?

Yes, there are two levels of PLAs (i.e., two AND arrays and two OR arrays)
which take inputs from several internal registers, and whose outputs
affect the operation of the microsequencer.

> I think a cheap microscope and a camera is good enough.

Tried that. Need a _good_ microscope and a camera.

> Perhaps just a camera with a "macro" setting

Tried that too, with a 4 megapixel camera. Seems to be at least two
orders of magnitude off.

> or a camera + a magnifying glass.

Haven't tried that.

Peter "Firefly" Lund

Jan 6, 2007, 11:46:08 PM
On Sat, 6 Jan 2007, Eric Smith wrote:

> Yes, there are two levels of PLAs (i.e., two AND arrays and to OR arrays)
> which take inputs from several internal registers, and whose outputs
> affect the operation of the microsequencer.

Nasty.

-Peter

drhowarddrfine

Jan 6, 2007, 11:58:49 PM
Eric Smith wrote:

>> Perhaps just a camera with a "macro" setting
>
> Tried that too, with a 4 megapixel camera. Seems to be at least two
> orders of magnitude off.
>
>> or a camera + a magnifying glass.
>
> Haven't tried that.

Use film. It has higher resolution than a digital camera.

ChrisQuayle

Jan 7, 2007, 2:08:25 PM
Peter "Firefly" Lund wrote:

>
> It certainly does. It means you have to upgrade your operating system
> or install a funny "extension" to do the job instead of the operating
> system if you want to use the newer CPU.

I think we must be at cross purposes here. On exception, at least the
stack pointer and current cpu status must be pushed onto the stack and
these values are popped in reverse order on exit from the handler. Note
that this is all done transparently from the software point of view, as
it's a hardware function. The order in which this is done is important
to the degree that the program counter gets pushed first and popped
last, but neither the application nor the os exception handler need to
know anything about it. So why should 'extensions' be needed, or am I
missing something ?.

> I don't think having many prioritized interrupt levels really matters

> all that much. Vectors probably do but not priorities.

Well, in real time systems at least, interrupts from devices like
scheduling clocks need to have priority over more mundane tasks like
serial comms. Therefore, you prioritise interrupts to guarantee service
for critical devices, irrespective of what other device is currently
interrupting the system. In non real time systems, you often have real
time requirements anyway - low interrupt latency for hard drive and
network comms interfaces vs loose requirements for keyboard interfaces,
serial comms etc. Even the original pc used a prioritised interrupt
structure, tagging on the 8259 device, which can't have been low cost at
the time. Even older cpu's like the original 6502 had an implied
priority structure: reset, highest, non maskable interrupt, then irq at
the lowest priority...

Chris

ChrisQuayle

Jan 7, 2007, 2:42:35 PM
Peter "Firefly" Lund wrote:

>> So not much time slip between them.
>
>
> As far as I know it was a bit worse than that.
>
> The 68K was introduced in September, 1979 and the 8086 on June 8, 1978.
> I haven't been able to dig up any info on early availability of the 8086
> but the 68K seems to have had problems.
>

If you do a bit more digging, you'll find that the original research that
led to the 68k was started in 1975, so it's very much in the same time
frame...

Chris

ChrisQuayle

Jan 7, 2007, 3:39:08 PM

It's the difference between casual programming, where you never get
enough experience to be really fluent and full immersion over say a year
or two, where you end up using nearly all the capability at some time or
another. I would miss pre and post dec/inc operators, as they can save a
lot of instructions when accessing arrays, depending on how you have
structured the code. Perhaps not so important now, but some systems were
very memory constrained for the tasks they were expected to do and a
main goal was to write efficient code, while still maintaining structure
and readability.

As an example, see if you can work out what's happening here and why:

tst (pc)+
10$ sec
rts pc

This is typical of the idiom and saves memory. By the time you have a
couple of hundred modules, the overall saving is quite significant.

Now of course, all is surfaces, and uSoft Studio rulz ok :-)...

Chris

ChrisQuayle

Jan 7, 2007, 4:22:11 PM
ChrisQuayle wrote:
> Peter "Firefly" Lund wrote:

>
> I think we must be at cross purposes here. On exception, at least the
> stack pointer

^^^^^^^^^^^^^
Sorry, brain fade. Should be program counter...

Chris

Peter "Firefly" Lund

Jan 7, 2007, 11:51:30 PM
On Sun, 7 Jan 2007, ChrisQuayle wrote:

>> It certainly does. It means you have to upgrade your operating system or
>> install a funny "extension" to do the job instead of the operating system
>> if you want to use the newer CPU.
>
> I think we must be at cross purposes here. On exception, at least the stack
> pointer and current cpu status must be pushed onto the stack and these values
> are popped in reverse order on exit from the handler. Note that this is all
> done transparently from the software point of view, as it's a hardware
> function. The order in which this is done is important to the degree that the
> program counter gets pushed first and popped last, but neither the
> application nor the os exception handler need to know anything about it. So
> why should 'extensions' be needed, or am I missing something ?.

What if the system software needs to examine the exception stack frame and
it is different from what is expected?

This is not hypothetical.

It was one of the reasons why it was difficult to put a 68060 in a Mac.
The relevant exception handling and occasional faking of an old-style
exception stack frame + forwarding to the os handler was taken care of by
some system software that came with the (third-party) CPU board.

In this case, this kind of software was called an "extension".

>> I don't think having many prioritized interrupt levels really matters all
>> that much. Vectors probably do but not priorities.
>
> Well, in real time systems at least, interrupts from devices like scheduling
> clocks need to have priority over more mundane tasks like serial comms.

One can get surprisingly far by just being able to block interrupts
individually.

After taking an interrupt, one blocks that particular kind of interrupt
and sets a flag. The priority handling can be handled in software,
including starvation prevention, reshuffling of priorities, and permanent
disabling of naughty interrupt sources.
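
A rough sketch of that scheme (device names and priority numbers invented for illustration): the hardware only needs per-source masking, and all ordering policy lives in software.

```python
import heapq

# Software-managed interrupt priorities on top of per-source masking.
# Device names and priority values are illustrative assumptions.
PRIORITY = {"clock": 0, "disk": 1, "serial": 2}   # lower number = more urgent

pending = []     # heap of (priority, source) flags set by first-level handlers
masked = set()   # sources currently blocked at the (hypothetical) controller

def take_interrupt(source):
    """First-level handler: block the source, set its flag, return quickly."""
    masked.add(source)
    heapq.heappush(pending, (PRIORITY[source], source))

def dispatch(handlers):
    """Service pending sources most-urgent-first, unmasking each afterwards.
    Reshuffling PRIORITY, or never unmasking a naughty source, is trivial."""
    order = []
    while pending:
        _, source = heapq.heappop(pending)
        handlers[source]()
        masked.discard(source)
        order.append(source)
    return order
```

For example, if "serial" and "clock" are both pending, dispatch services the clock first regardless of which interrupt arrived first.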

> Therefore, you prioritise interrupts to guarantee service for critical
> devices, irrespective of what other device is currently interrupting the

What is critical enough to need that on a non-embedded machine?

> original pc used a prioritised interrupt structure, tagging on the 8259
> device, which can't have been low cost at the time. Even older cpu's like the

But look at how screwed up the priorities were (or at any rate, got, by
the time of the AT when the second interrupt controller was introduced).

You could rotate the priorities (and even run the interrupt controller in
a mode where it automatically rotated them) but that was all.

If you really needed real-time -- and had more than one interesting
interrupt source -- you did something like what I sketched above and handled
the prioritization in software.

> original 6502 had an implied priority structure: reset, highest, non maskable
> interrupt, then irq at the lowest priority...

Sure, something rudimentary like that, fine, yes, that's useful.

-Peter

Peter "Firefly" Lund

Jan 7, 2007, 11:54:19 PM
On Sun, 7 Jan 2007, ChrisQuayle wrote:

> If you do a bit more digging, you'l find that the original research that led
> to the 68k was started in 1975, so it's very much in the same time frame...

That, I know.

As far as I know, the project was late, used many transistors, was not
compatible with anything, and was expensive in the beginning.

The 8086/8088 got there first, was smaller, was compatible, and was
cheaper ;)

Also, it seems to have been a backup project in case anything happened to
the 432, the flagship clean-sheet design.

-Peter

Peter "Firefly" Lund

Jan 8, 2007, 12:14:50 AM
On Sun, 7 Jan 2007, ChrisQuayle wrote:

> would miss pre and post dec/inc operators, as they can save a lot of
> instructions whan accessing arrays, depending on how you have structured the
> code.

These days, instructions are free but loads and stores are not ;)

> As an example, see if you can work out what's happening here and why:
>
> tst (pc)+
> 10$ sec
> rts pc

It's an epilogue that returns to the address on the top of the stack with
these flags:

N = 0
Z = 0
V = 0
C = 0

"tst (pc)+" ; after loading the instruction, pc points to the word after
tst, namely the "sec" instruction. The operand that gets loaded is 000261
octal/00B1 hex. The tst instruction subtracts 0 from the operand value in
order to compare it to zero and sets the flags accordingly. V and C
always get set to 0 by that instruction, Z gets set to 0 because 000261 is
not equal to 0 and N gets set to 0 because 000261 is not less than 0.

As part of the loading of the operand, pc gets incremented. tst loads a
word (whereas tstb would load a byte) but that is not important in the
case where pc is used; pc is always incremented/decremented by two when
used in an addressing mode, no matter the operand size.

I think "10$" is a temporary label of sorts -- I know very little about
the PDP-11 assembler(s?).

The sec instruction will be skipped because it was used as an operand.

"rts pc" will do what a normal return does in architectures that always
save return addresses on the stack. PDP-11 wasn't one of those, but using
"pc" as the register holding the return address simulates it. This
instruction doesn't change the flags.

My guess is that sometimes the epilogue would be entered at the label
"10$", in which case it will return with C = 1 (and the other flags
untouched by this code).
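
The flag behaviour described above can be sketched in a few lines of Python (a simplified model of TST's condition codes, not a full PDP-11 emulator):

```python
# Simplified model of the PDP-11 TST condition codes: compare operand to zero.
# N = sign bit of the 16-bit word, Z = word is zero, V and C always cleared.

def tst_flags(word):
    word &= 0xFFFF
    return {"N": word >> 15, "Z": int(word == 0), "V": 0, "C": 0}

SEC = 0o000261   # the encoding of 'sec', fetched here as TST's operand
print(tst_flags(SEC))   # {'N': 0, 'Z': 0, 'V': 0, 'C': 0}
```

Since 000261 octal is nonzero and has a clear sign bit, all four flags come out clear, which is the whole point of the trick.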

-Peter

Peter "Firefly" Lund

Jan 8, 2007, 12:23:58 AM
On Sun, 7 Jan 2007, ChrisQuayle wrote:

> This is typical of the idiom and saves memory. By the time you have a couple
> of hundred modules, the overall saving is quite significant.
>
> Now of course, all is surfaces, and uSoft Studio rulz ok :-)...

Try this:

ftp://ftp.scene.org/pub/mirrors/hornet/code/effects/stars/mwstar.zip

-Peter

Terje Mathisen

Jan 8, 2007, 9:24:30 AM

Nice.

The guy who wrote this entered Caltech a few months later, and then
started working for Goldman Sachs after graduation, presumably at quite
a bit above my own starting salary. :-)

Good for him!

Terje

--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"

ChrisQuayle

Jan 8, 2007, 1:35:10 PM
Peter "Firefly" Lund wrote:

> It was one of the reasons why it was difficult to put a 68060 in a Mac.
> The relevant exception handling and occasional faking of an old-style
> exception stack frame + forwarding to the os handler was taken care of
> by some system software that came with the (third-party) CPU board.
>
> In this case, this kind of software was called an "extension".

Ok fair comment, but you can't expect a system manufacturer like Apple
to even attempt to cater for a 3rd party add-on vendor. As far as I can
see, Apple in particular were well known for minimising hardware count
in their machines and probably cut corners and made use of tricky stuff
at system software level as well. The fact that a 3rd party vendor had
trouble adding a much more modern cpu to an earlier well locked down
design is no surprise really.

So, while it's an example, it is a bit esoteric and doesn't invalidate
the point I was trying to make.

>
> One can get surprisingly far by just being able to block interrupts
> individually.

Well, we do that as well and it's pretty much essential, but that's not
the same mechanism, nor due to the same requirement as having a hardware
based interrupt priority structure. Different animals.

Chris

Peter "Firefly" Lund

Jan 8, 2007, 2:51:12 PM
On Mon, 8 Jan 2007, Terje Mathisen wrote:

> The guy who wrote this entered Caltech a few months later, and then started
> working for Goldman Sachs after graduation, presumably at quite a bit above
> my own starting salary. :-)
>
> Good for him!

Yes :)

The wildest thing is that it was actually possible to improve it a bit!

Instead of using INT 16h to test for a keypress, one could use IN AL, 60h
followed by RCR AL,1 and a JC/JNC. It was also possible to write some of
the interleaved loop stuff such that at least one byte was shared between
instructions, in other words, such that the lower jump jumps into the middle
of an instruction such that the remainder of the instruction just happens
to be another useful instruction.

Instead of using MOV BH,A0 followed by MOV ES, BX, one could use a POP ES,
I think it was. The top of memory was available at a field inside the PSP
and as far as I recall, it was easy to get it into ES. The top of memory
wouldn't always be A000h but it would almost always be close enough for
the visuals to work just fine.

-Peter

Peter "Firefly" Lund

Jan 8, 2007, 2:55:54 PM
On Mon, 8 Jan 2007, ChrisQuayle wrote:

> Ok fair comment, but you can't expect a system manufacturer like Apple to
> even attempt to cater for a 3rd party add-on vendor. As far as I can see,

No, but I can expect Motorola to make their CPUs more backwards
compatible.

Compatibility snags were a contributing factor to why Apple was a bit slow
in introducing new CPUs for the old Macs.

> Well, we do that as well and it's pretty much essential, but that's not the
> same mechanism, nor due to the same requirement as having a hardware based
> interrupt priority structure. Different animals.

Where do you need hardware-based interrupt priority handling (outside of
NMI) that can't be done as well by software + hardware-based interrupt
blocking?

-Peter

Niels Jørgen Kruse

Jan 8, 2007, 4:30:34 PM
Peter "Firefly" Lund <fir...@diku.dk> wrote:

> On Sun, 7 Jan 2007, ChrisQuayle wrote:
>
> > would miss pre and post dec/inc operators, as they can save a lot of
> > instructions whan accessing arrays, depending on how you have structured the
> > code.
>
> These days, instructions are free but loads and stores are not ;)

Slots in the reorder window are not free.

--
Mvh./Regards, Niels Jørgen Kruse, Vanløse, Denmark

John Ahlstrom

Jan 8, 2007, 4:31:25 PM

Certainly one of the most important differences in
the 8086 and 68000 was the idea that the 68000 would
be the first of a long-lived architecture of high-end
microprocessors, while the idea was that the 8086
would be a quick-to-market space holder till the
real long-lived, high-performance 8800 (8816, 432)
arrived.

Different requirements produced different architectures
and implementations.

JKA

ChrisQuayle

Jan 8, 2007, 5:07:59 PM
Peter "Firefly" Lund wrote:
> On Sun, 7 Jan 2007, ChrisQuayle wrote:
>
>> would miss pre and post dec/inc operators, as they can save a lot of
>> instructions whan accessing arrays, depending on how you have
>> structured the code.
>
>
> These days, instructions are free but loads and stores are not ;)
>

Instructions are free ?.

Pre or post inc/dec addressing modes typically operate on a machine
register, so no additional load or store from memory is involved. That
is, the pointer is adjusted, not what it points to.

> My guess is that sometimes the epilogue would be entered at the label
> "10$", in which case it will return with C = 1 (and the other flags
> untouched by this code).

Correct. Such constructs were widely used to save instructions / memory
and associated access time. It's basically an economic way of providing
a normal or error return condition for a function. An example of how pre
or post inc/dec addressing modes can be useful and save a load / store
or two. Note that this is not self modifying code or any other tricky
programming idiom. Nor does it detract from readability or program flow.

Chris

ChrisQuayle

Jan 8, 2007, 5:08:12 PM

Very cool - Have neither the fluency with pc architecture, nor x86 asm
to produce something like that :-)...

Chris

ChrisQuayle

Jan 8, 2007, 6:03:20 PM
Peter "Firefly" Lund wrote:

>
> No, but I can expect Motorola to make their CPUs more backwards compatible.
>

But are they backwards compatible in terms of instruction execution
stream ?. The exception stack frame format is irrelevant to well written
system software and applications. Why should Motorola guarantee
compatibility at that level just so some 3rd party vendor can hack a more
powerful cpu on to an old machine ?.

Just what exactly was it about the stack frame format that caused the
trouble ?. I don't have a 68060 manual here, so a full explanation is in
order, yes ?.

> Compatibility snags was a contributing factor to why Apple was a bit
> slow in introducing new CPUs for the old Macs.
>

A cynic might suggest that it was because Apple used undocumented
features of the cpu or non data sheet hardware operation to get the job
done, in order to save a chip or two.

>
> Where do you need hardware-based interrupt priority handling (outside of
> NMI) that can't be done as well by software + hardware-based interrupt
> blocking?

Ok, let's compare two methods:

With a single level interrupt structure and >1 interrupting devices, you
effectively set the interrupt priority by the order in which device
status registers are polled in software, perhaps polling some or all of
those you are not interested in as well, to determine the interrupt
source. The problem with this is that higher priority devices are denied
access while the lower priority device is being serviced. While you may
be able to re-enable interrupts within the handler to allow a higher
priority device to get service once some initial work is done, the
software overhead to make this work properly can be quite significant.
This sort of defeats the object of having a fast access interrupt
structure in the first place. In engineering terms, it's a dog's breakfast
of a solution, though it is cheap.
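
The single-level polling scheme just described can be sketched like this (the device list is imaginary): priority is nothing more than poll order, and everything later in the list waits.

```python
# Single-level interrupt dispatch by polling device status registers:
# the order in which status is checked *is* the priority. The devices
# here are imaginary stand-ins for real status-register reads.

def poll_and_service(devices):
    """devices: list of (read_status, handler) pairs, highest priority first."""
    for read_status, handler in devices:
        if read_status():        # is this device requesting service?
            handler()            # first match wins; the rest keep waiting
            return True
    return False                 # nobody asserted: a spurious interrupt
```

A higher-priority device that asserts while a lower one is being serviced gets nothing until the handler returns, which is exactly the weakness described above.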

Ok, your turn - describe a hardware prioritised interrupt structure, to
fill in the second method...

Chris


Erik Trulsson

Jan 9, 2007, 4:29:26 AM
ChrisQuayle <nos...@devnul.co.uk> wrote:
> Peter "Firefly" Lund wrote:
>
>>
>> No, but I can expect Motorola to make their CPUs more backwards compatible.
>>
> But are they backwards compatable in terms of instruction execution
> stream ?. The exception stack frame format is irrelevant to well written
> system software and applications.

Applications would normally not need to know anything about the exception stack
frame format, but system software is another matter.

If the system wants to actually *handle* the exception properly then it will
need to know the format of the stack frame in order to find out things like
*which* instruction caused the exception, or which memory access caused a page
fault.


> Why should Motorola guarantee
> compatabilty at that level just so some 3rd party vendor can hack a more
> powerfull cpu on to an old machine ?.

Not just for that, but also because compatibility at that level could allow
users to run older system software on newer systems.

>
> Just what exactly was it about the stack frame format that caused the
> trouble ?. I don't have a 68060 manual here, so a full explanation is in
> order, yes ?.

The problem was that the stack frame format *changed*. That meant that
you had to get an updated (or at least patched) OS in order to run on
the new CPUs. This was not just a problem for the Mac, but for all other
systems that used the 68k series as well.

>
>> Compatibility snags was a contributing factor to why Apple was a bit
>> slow in introducing new CPUs for the old Macs.
>>
>
> A cynic might suggest that it was because Apple used undocumented
> features of the cpu or non data sheet hardware operation to get the job
> done, in order to save a chip or two.

That cynic would probably be wrong. It was rather the case that Apple depended
on documented features of the earlier models of the 68k series that were
different in the later models.

--
<Insert your favourite quote here.>
Erik Trulsson
ertr...@student.uu.se

Terje Mathisen

Jan 9, 2007, 6:49:11 AM

I've studied the 20-byte version as well; it is available at a
'Programming Gems' website (along with a few snippets of my own code
:-). I really don't like most of the "optimizations" used to get down to 20:

It uses 9FFFh instead of the proper 0A000h as the segment of the video
screen, something which does work, but only due to a non-guaranteed
value on the top of the stack.

Using POP from the stack to initialize the star field crashes if there
happens be a timer (or other hw) interrupt while the stack pointer has
wrapped around into the PSP+code.

I.e. the original 24-byte was elegant, while the 20-byte version was
ugly, IMHO. :-)

The tightest code I've ever written must be either my combined
keyboard/EGA/VGA driver or maybe the code I wrote for executable MIME ascii.

The latter uses code as data and vice versa in several levels, while
handling text reformatting at various levels, and it stays within the
70+ MIME-blessed chars/bytes that are transparent across all email gateways.

Peter "Firefly" Lund

Jan 9, 2007, 10:49:35 AM
On Tue, 9 Jan 2007, Terje Mathisen wrote:

> Using POP from the stack to initialize the star field crashes if there
> happens be a timer (or other hw) interrupt while the stack pointer has
> wrapped around into the PSP+code.

I didn't like that one, either :(

> I.e. the original 24-byte was elegant, while the 20-byte version was ugly,

But an interesting kind of ugly, wouldn't you say?

I still like the IN AL, 60h .. DAS .. JC/JNC idea. It doesn't work with
all keypresses but Esc is one of the keys it does work with.

-Peter

toby

Jan 9, 2007, 6:35:32 PM

If you can fill a medium format frame (not sure how close a macro lens
will get) and use a suitable emulsion, you could achieve at least 400
megapixels... e.g. RB-67 bellows and 140mm macro?

ChrisQuayle

Jan 9, 2007, 11:09:03 AM
Erik Trulsson wrote:

> If the system wants to actually *handle* the exception properly then it will
> need to know the format of the stack frame in order to find out things like
> *which* instruction caused the exception, or which memory access caused a page
> fault.
>

...and if Apple had designed a machine using 68060, no doubt the install
program would have probed for cpu type and loaded the appropriate
modules, or patched the os on the fly. To criticise an architectural
change because some third party add-on company found it difficult to
integrate their product is hardly relevant. It's precisely the sort of
problem one would expect to have to solve if you build such products.

> The problem was that the stack frame format *changed*. That meant that
> you had to get an updated (or at least patched) OS in order to run on
> the new CPUs. This was not just a problem for the Mac, but for all other
> systems that used th 68k series as well.

Well, nothing stays the same if we are to have progress and it's true of
many architectures. Peter's incremental approach in action :-)...

Chris

ken...@cix.compulink.co.uk

Jan 9, 2007, 11:15:48 AM
In article
<Pine.LNX.4.61.07...@ask.diku.dk>,
fir...@diku.dk ( Lund) wrote:

> Sure, something rudimentary like that, fine, yes,
> that's useful.

The Z80 had a NMI and three modes of interrupts with Mode
2 and Mode 1 being vectored. If multiple interrupts were
possible at the same time they had to have priority
settled by external hardware.

Ken Young

ken...@cix.compulink.co.uk

Jan 9, 2007, 11:15:49 AM
In article <Y4Aoh.13453$696....@newsfe7-win.ntli.net>,
nos...@devnul.co.uk (ChrisQuayle) wrote:

> Ok, your turn - describe a hardware prioritised
> interrupt structure. to fill in the second method...

The Z-80 specific peripheral chips had built in support
for daisy chaining, or with mode 1 and 2 interrupts you
could use external logic to decide which external chip was
allowed to place the vector data on the data bus.

I happen to have a Z80 book to hand.

Ken Young

ChrisQuayle

Jan 9, 2007, 11:58:27 AM

Didn't the z80 also have fully vectored interrupts for zilog peripheral
devices, implemented as a vector register in the peripheral controller
chip itself ?...

Chris

Peter "Firefly" Lund

Jan 9, 2007, 12:26:47 PM
On Tue, 9 Jan 2007, ChrisQuayle wrote:

> Erik Trulsson wrote:
>
>> If the system wants to actually *handle* the exception properly then it
>> will
>> need to know the format of the stack frame in order to find out things
>> like
>> *which* instruction caused the exception, or which memory access caused a
>> page
>> fault.
>>
>
> ...and if Apple had designed a machine using 68060, no doubt the install
> program would have probed for cpu type and loaded the appropriate modules, or
> patched the os on the fly. To criticise an architectural change because some
> third party add-on company found it difficult to integrate their product is
> hardly relevant. It's precisely the sort of problem one would expect to have
> to solve if you build such products.

I think you misunderstood us.

I am saying that /Motorola/ made 68K versions that were incompatible with
each other. This gave problems /both/ for Apple /and/ for others.

That the third-party product needed to work around those problems had
nothing to do with Apple taking short cuts.

Apple also needed to work around the incompatibilities and it took them
longer to do so (because it was somewhat difficult) than the
third-parties.

Even Apple had the problem that older system software didn't work well
with newer CPUs.

The responsibility for this lies almost entirely with Motorola.

The third-party in question was Radius (there were probably also others),
which consisted of some of the most knowledgeable people about the
Macintosh that existed on the planet at that time. One of them was
Burrell Smith, who designed the electronics for the first Macs.

>> The problem was that the stack frame format *changed*. That meant that
>> you had to get an updated (or at least patched) OS in order to run on
>> the new CPUs. This was not just a problem for the Mac, but for all other
>> systems that used th 68k series as well.
>
> Well, nothing stays the same if we are to have progress and it's true of many
> architectures. Peter's incremental approach in action :-)...

Wrong. Completely wrong.

One of the blunders from Motorola was that they didn't have a flag in the
CPU for ignoring the top 8 virtual address bits in the 68020+.

Others were changing exception stack frame layouts between the versions --
they were identified by a four-bit code but what does that help if a newer
CPU uses a new format? And it is still very bad form to expect exception
handlers to inspect the stack to find out what the stack layout is.

They also removed some of the instructions in the later versions (and I am
not talking about the coldfire/dragonball embedded versions). And changed
the allowed alignments for some of them.
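
On the address-bits point, a toy illustration (not Motorola's logic, and the tag value is invented): software that hid a tag byte in the top of a 32-bit pointer worked on the 68000's 24-bit bus but broke on the 68020.

```python
# Toy model of the 68000/68020 address-decoding difference. The 68000 drove
# only 24 address lines, so the top byte of a pointer was free for tags;
# the 68020 decodes all 32 bits, so the same pointer addresses other memory.

def bus_address_68000(ptr):
    return ptr & 0x00FFFFFF      # top 8 bits never reach the bus

def bus_address_68020(ptr):
    return ptr & 0xFFFFFFFF      # every bit is significant

tagged_ptr = 0x2A001000          # 0x2A is an (invented) tag byte
print(hex(bus_address_68000(tagged_ptr)))   # 0x1000
print(hex(bus_address_68020(tagged_ptr)))   # 0x2a001000
```

A mode flag to ignore the top byte would have let tagged-pointer software keep running unchanged on the newer parts.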

-Peter

Terje Mathisen

Jan 9, 2007, 1:02:40 PM
Peter "Firefly" Lund wrote:
> On Tue, 9 Jan 2007, Terje Mathisen wrote:
>
>> Using POP from the stack to initialize the star field crashes if there
>> happens be a timer (or other hw) interrupt while the stack pointer has
>> wrapped around into the PSP+code.
>
> I didn't like that one, either :(
>
>> I.e. the original 24-byte was elegant, while the 20-byte version was
>> ugly,
>
> But an interesting kind of ugly, wouldn't you say?

Sure, I'll give you that one.


>
> I still like the IN AL, 60h .. DAS .. JC/JNC idea. It doesn't work with
> all keypresses but Esc is one of the keys it does work with.

The smallest code doesn't check the keyboard at all, so that's a less
important optimization IMHO.

ChrisQuayle

Jan 9, 2007, 4:57:59 PM
Peter "Firefly" Lund wrote:

>
> I am saying that /Motorola/ made 68K versions that were incompatible
> with each other. This gave problems /both/ for Apple /and/ for others.
>

> Apple also needed to work around the incompatibilities and it took them
> longer to do so (because it was somewhat difficult) than the third-parties.
>
> Even Apple had the problem that older system software didn't work well
> with newer CPUs.

That's Apple's problem - they should write system software that works.
Be honest, the older macs never were about robust, pro engineering
design - overpriced, crashing, underperforming marketing exercise
systems targeted at media arts weenies and those who didn't like
computers. Too much art and not enough engineering, perhaps. A complete
contrast to the Apple II, which was an honest, unpretentious machine and
very good for its time.

> One of the blunders from Motorola was that they didn't have a flag in
> the CPU for ignoring the top 8 virtual address bits in the 68020+.
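[The problem alluded to here is 24-bit "dirty" addressing: on a 68000
only 24 address bits reach the bus, so software stashed flags in a
pointer's top byte -- which stopped working once the 68020 decoded all
32 bits. A toy Python illustration of my own, not from the thread:]

```python
ADDR_MASK_24 = 0x00FFFFFF

def deref_68000(ptr, memory):
    # 68000/68010: only the low 24 address bits leave the chip, so a
    # tag byte stashed in the top 8 bits is silently ignored
    return memory[ptr & ADDR_MASK_24]

def deref_68020(ptr, memory):
    # 68020+: the full 32-bit address is decoded, so a tagged pointer
    # now addresses the wrong (usually nonexistent) location
    return memory[ptr]

memory = {0x001234: "block header"}
tagged_ptr = 0x80001234   # top bit abused as a software flag

deref_68000(tagged_ptr, memory)   # still finds the block
# deref_68020(tagged_ptr, memory) would raise KeyError
```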
>
> Others were changing exception stack frame layouts between the versions
> -- they were identified by a four-bit code, but what good does that do
> if a newer CPU uses a new format? And it is still very bad form to
> expect exception handlers to inspect the stack to find out what its
> layout is.
>

Well, systems programming is about getting down into the internals, so
you should expect to have to do stuff like that from time to time. Just
part of the rich tapestry of design life.

> They also removed some of the instructions in the later versions (and I
> am not talking about the coldfire/dragonball embedded versions). And
> changed the allowed alignments for some of them.

I remain unconvinced on this point, so let's leave it at that and agree
to differ ?. The general debate is fun, however :-)...

Chris

Peter "Firefly" Lund

unread,
Jan 9, 2007, 5:19:47 PM1/9/07
to
On Tue, 9 Jan 2007, ChrisQuayle wrote:

> That's Apple's problem - they should write system software that works. Be
> honest, the older macs never were about robust, pro engineering design -

Yes, they were :)

> overpriced, crashing, underperforming marketing exercise systems targeted at

But they were also overpriced -- and crashing, mostly due to the
unfortunate choice of running in supervisor mode by default and due to the
fact that the system calls didn't validate their parameters.

> media arts weenies and those who didn't like computers. Too much art and not
> enough engineering, perhaps. A complete contrast to the Apple II, which was
> an honest, unpretentious machine and very good for its time.

The first Mac and the Apple II were much alike in their creative hardware
design. Both are wonders of component optimization.

>> They also removed some of the instructions in the later versions (and I
>> am not talking about the coldfire/dragonball embedded versions). And
>> changed the allowed alignments for some of them.
>
> I remain unconvinced on this point, so let's leave it at that and agree to
> differ ?.

What are you unconvinced about? That they really changed exception stack
frame formats? Several times? That some of the instructions got stricter
alignment requirements imposed on them later in the 68K series? That they
removed some of the (user-level) instructions?

> The general debate is fun, however :-)...

Yeah :)

-Peter

ken...@cix.compulink.co.uk

unread,
Jan 10, 2007, 5:54:37 AM1/10/07
to
In article <TQPoh.80021$493....@newsfe4-gui.ntli.net>,
nos...@devnul.co.uk (ChrisQuayle) wrote:

> Didn't the z80 also have fully vectored interrupts for
> zilog peripheral devices, implemented as a vector
> register in the peripheral controller chip itself ?...

Yes, the peripheral controller had to put the vector on
the data bus, the chips also included the hardware needed
for priority determination.
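[For concreteness, in Z80 interrupt mode 2 the handler address is built
from the CPU's I register (high byte) and the byte the peripheral puts
on the data bus (low byte). A small Python sketch of the documented
behaviour -- names and the example values are mine:]

```python
def im2_handler(i_reg, vector_byte, memory):
    # Table entry address: the I register supplies bits 15..8, the
    # interrupting Zilog peripheral supplies bits 7..0 on the data bus.
    entry = (i_reg << 8) | (vector_byte & 0xFF)
    # The 16-bit handler address is fetched little-endian from the table.
    return memory[entry] | (memory[entry + 1] << 8)

# e.g. I = 0x3F, a CTC channel programmed with vector 0x10, and table
# slot 0x3F10/0x3F11 pointing at a handler routine at 0x8000:
memory = {0x3F10: 0x00, 0x3F11: 0x80}
```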

Ken Young

ChrisQuayle

unread,
Jan 12, 2007, 5:34:13 PM1/12/07
to
Eric Smith wrote:
> ChrisQuayle <nos...@devnul.co.uk> writes:
>
>>Yes it was, but probably the best that could be done at the time in
>>commodity devices - ok, bit slice in intent ?. I think the more
>>complex AMD 29xx series came quite a bit later than 1972.
>
>
> Sure, but there were bit slice components before the Am2900 series.
> The earliest I'm aware of were from Fairchild in 1968-1969.
>
> Year Vendor P/N description
> ---- --------- ---- ---------------------------
> 1968 Fairchild 3800 8-bit data path
> 1968 Fairchild 3804 4-bit data path
> 1972 National MM5750 4-bit data path
> 1972 National MM5751 sequencer and microcode ROM
> 1974 MMI 6701 4-bit data path, very similar to later Am2901
> 1974 MMI 6700 4-bit sequencer
> 1975 Intel 3002 2-bit data path
> 1975 Intel 3001 9-bit sequencer
> 1975 AMD 2901 4-bit data path
> 1975 AMD 2909 4-bit sequencer
> 1975 AMD 2911 4-bit sequencer
> 1976 TI 74S481 4-bit data path
> 1976 TI 74S482 sequencer
> 1977 AMD 2903 4-bit data path
> 1977 AMD 2910 10-bit sequencer
> 1977 Motorola 10800 4-bit data path, ECL
> 1977 Motorola 10801 sequencer, ECL
>
> ? MMI 67110 sequencer
> ? AMD 29203 4-bit data path
> ? TI 74AS888 8-bit data path
> ? TI 74AS890 sequencer
> ? TI SBP0400 4-bit data path, I2L
> ? AMD 29116 16-bit data path
>
> I'm missing information on the Fairchild Macrologic bitslice parts,
> which were available in both TTL and CMOS. There are probably
> some others I'm not aware of.
>
> I've seen conflicting reports over whether the Four-Phase AL1 (1970)
> should be considered a bit slice design. I need to track down a copy of
> the paper "Four-phase LSI logic offers new approach to computer
> designer" by L. Boysel and J. Murphy from the April 1970 issue
> of Computer Design.

That's interesting. Nearly 3 decades ago, I was given a Honeywell VIP440
(?) terminal. Inside were a load of boards, Intel 1103 dynamic ram etc
and a cpu board containing around 6 Fairchild 40 pin packages with 1969
date codes. The terminal wasn't working, but after blagging the
schematics from Honeywell uk London office, I fixed the psu and away it
flew. The interesting thing was that the cpu chips had names in the
schematic like BLU, or basic logic unit. At the time, I thought it was
some sort of precursor to the Fairchild F8 micro, as that was also a
multichip 40 pin package implementation.

I looked, but never did manage to find any info on the chips; the above
has probably solved the mystery. Do you have any other info or pointers
to the 3800/3804 series ?...

Chris

Del Cecchi

unread,
Jan 12, 2007, 9:15:09 PM1/12/07
to

"ChrisQuayle" <nos...@devnul.co.uk> wrote in message
news:F1Uph.83164$493....@newsfe4-gui.ntli.net...

Fairchild and Motorola both made a line of ECL chips, MECL1, 2, 3, 10k
and 100k


Jonathan Thornburg -- remove -animal to reply

unread,
Jan 14, 2007, 9:20:17 AM1/14/07
to
Peter "Firefly" Lund wrote:
> Timing and race conditions are easy. Come on, race conditions, I mean,
> really. Races are not hard, okay?

ChrisQuayle <nos...@devnul.co.uk> wrote:
> It may be easy for half a dozen ttl devices, but by the time you have
> hundreds of devices, you will need to be a very competent designer to
> make it work reliably over component spreads and temperature. Building a
> vax in ssi ttl would be a herculean task, but even the 11/05 was a two
> board hex width unibus set with hundreds of ssi ttl devices.

Chris is ++right. Back in the "old days" even professional designers
with lots of experience sometimes missed race conditions. There's a
classic example in Tracy Kidder's "The Soul of a New Machine",
http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
in the chapter "The Case of the Missing Nand Gate". A design team at
Data General, working on what became the Data General MV/8000 (= DG's
answer to the VAX), missed a race condition... and spent two months
tracking down the resulting intermittent failure.

I suspect that even with today's simulation capability, similar bugs
still lurk in modern microprocessors. In fact, I'm sure of it -- just
look at the errata sheet for your favorite microprocessor.

ciao,

--
-- "Jonathan Thornburg -- remove -animal to reply" <jth...@aei.mpg-zebra.de>
Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut),
Golm, Germany, "Old Europe" http://www.aei.mpg.de/~jthorn/home.html
"Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral."
-- quote by Freire / poster by Oxfam

Peter "Firefly" Lund

unread,
Jan 14, 2007, 11:15:50 AM1/14/07
to
On Sun, 14 Jan 2007, Jonathan Thornburg -- remove -animal to reply wrote:

> I suspect that even with today's simulation capability, similar bugs
> still lurk in modern microprocessors. In fact, I'm sure of it -- just
> look at the errata sheet for your favorite microprocessor.

Yes, but they are a lot more complicated than what I have in mind. It's
hard to create race conditions for an in-order pipelined machine with only
one clock domain. Almost every signal flows in the same direction and
each pipeline stage can be verified in isolation.

Secondly, this is something that actually can be verified and proved.
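[The "verify each stage in isolation" point can be made concrete with a
toy model -- mine, not Peter's -- in which every stage is a pure
function of the previous stage's output, so signals only ever flow
forward and there is nothing for them to race against:]

```python
# Toy single-clock-domain, in-order pipeline: each stage is a pure
# function of the previous stage's register, so each stage can be
# unit-tested on its own, independent of the rest of the machine.
def fetch(pc):
    return {"pc": pc, "insn": ("ADDI", 1)}   # fixed toy "program"

def decode(fetched):
    op, imm = fetched["insn"]
    return {"op": op, "imm": imm}

def execute(decoded, acc):
    return acc + decoded["imm"] if decoded["op"] == "ADDI" else acc

def run(cycles):
    acc = 0
    for pc in range(cycles):
        acc = execute(decode(fetch(pc)), acc)
    return acc
```

[run(5) accumulates five increments, and decode can be checked against
fetch's output without ever running the whole machine.]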

-Peter

jacko

unread,
Jan 14, 2007, 5:26:15 PM1/14/07
to

Peter "Firefly" Lund wrote:
> On Sun, 14 Jan 2007, Jonathan Thornburg -- remove -animal to reply wrote:
>
> > I suspect that even with today's simulation capability, similar bugs
> > still lurk in modern microprocessors. In fact, I'm sure of it -- just
> > look at the errata sheet for your favorite microprocessor.
>
> Yes, but they are a lot more complicated than what I have in mind. It's
> hard to create race conditions for an in-order pipelined machine with only
> one clock domain. Almost every signal flows in the same direction and
> each pipeline stage can be verified in isolation.

not so much a race within the machine, but a race on the WR signal for
external ram interface etc.

Del Cecchi

unread,
Jan 14, 2007, 6:10:03 PM1/14/07
to

"jacko" <jacko...@gmail.com> wrote in message
news:1168813575....@s34g2000cwa.googlegroups.com...
Are you guys talking about classic synchronous logic race conditions, or
some sort of logical microarchitecture race? In the former case, modern
timing tools are pretty good at finding those.


Eric Smith

unread,
Jan 20, 2007, 12:47:18 AM1/20/07
to
ChrisQuayle wrote:
> Do you have any other info or
> pointers to the 3800/3804 series ?...

Unfortunately not. It's *really* tough to find information on 1960s
and early 1970s chips now.

toby

unread,
Jan 20, 2007, 6:09:26 AM1/20/07
to

Is anyone really surprised, in this dumpster-happy system?

It's pretty tough to find information on chips released last week, too.
(For somewhat different reasons.)

</grump>

Don Lindsay

unread,
Jan 21, 2007, 6:59:53 PM1/21/07
to
On 2007-01-14, Peter "Firefly" Lund <fir...@diku.dk> wrote:
> It's
> hard to create race conditions for an in-order pipelined machine with only
> one clock domain. Almost every signal flows in the same direction and
> each pipeline stage can be verified in isolation.

IBM's first commercial RISC MPU (back before POWER 1) turned out to
have a bug with fetchahead when the fetch crossed a VM page boundary.

In general, the further apart the things that interact, the more funny
issues there are. Aside from different clock domains, and bit skews,
remote units may not even have identical voltage and temperature.
Seymour Cray's designs were notorious for the money spent preventing
ground bounce and signal reflection. So there's no shortage of ways to
screw up, and my hat is off to every designer of every reliable
product in the world.

---
Don

Peter "Firefly" Lund

unread,
Jan 22, 2007, 5:40:54 AM1/22/07
to
On Sun, 21 Jan 2007, Don Lindsay wrote:

> On 2007-01-14, Peter "Firefly" Lund <fir...@diku.dk> wrote:
>> It's
>> hard to create race conditions for an in-order pipelined machine with only
>> one clock domain. Almost every signal flows in the same direction and
>> each pipeline stage can be verified in isolation.
>
> IBM's first commercial RISC MPU (back before POWER 1) turned out to
> have a bug with fetchahead when the fetch crossed a VM page boundary.

Yep, that's a nasty one.

Page boundary crossings often have bugs. They are just not usually race
conditions.

-Peter
