
Randomized Address and Data Buses


Mark Thorson, May 6, 1998

All this talk about Gray code got me to thinking about
weird stuff like using Gray code for the address bus,
so that the minimum number of drivers would change state
when accessing sequential addresses. That would minimize
power consumption, crosstalk among the bus wires, and
emitted EMI.

But then I started thinking: if that's really what you want
to do, what's the best way to do it?

I think the answer is a sort of dynamically randomized code.

Imagine that instead of having, say, 16 bits of address
and 16 bits of data, you had 20 bits for each bus. Also,
imagine that you didn't use all the possible encodings.
You would only need to use one-sixteenth of all the encodings
to get the same functionality as 16 bits.

Some encodings, like 00000 and FFFFF, you wouldn't use because
they turn on (or off) all the drivers. To reduce ground
bounce (or power rail bounce), you'd favor encodings closer
to 50% ones and 50% zeroes.

But the neat trick would be to have multiple 20-bit codes
that map to the same 16-bit code. For example, both EF038
and 56AA0 could map to 0001. That way, any time you have
to send stuff down either the address or data bus, you have
a bunch of choices about what 20-bit code to drive.

By "dynamic randomizing", I mean that you choose among the
possible choices for the best encoding, which generally will
be the one that has the fewest number of bit transitions from
the previous bus cycle. For example, if I want to send 0001
and the last cycle was 44AE2, then 56AA0 would be preferred
over EF038.

There would be other considerations besides just the number
of bit transitions. If one encoding has 7 bit transitions
and the other has 8, I still might choose the one with 8
if it took me closer to the 50/50 ratio of ones to zeroes.

As you go to larger word sizes, it might be hard to get
a good fit between the pattern of ones and zeroes for the
current cycle as compared to the previous cycle, i.e. most
comparisons will come close to 50% of the bits making a
transition. For this reason, it might be best to apply this
technique on a per byte lane basis, rather than across the
whole bus.

Can anyone see a reason to shoot this idea down right away?
Or, has it already been thought of?

Mark W Brehob, May 6, 1998

Mark Thorson (e...@netcom.com) wrote:
: Can anyone see a reason to shoot this idea down right away?
: Or, has it already been thought of?


[Disclaimer: I know very little about this area!!!!]
[Oh, and I got a bit silly; this might actually make for a
good homework assignment for some class. It's fun, for sure.]

For much of what you are discussing I think you would need to
worry about worst case performance (# of bits switched might
be a good high-level approximation). So the question would be:
What would be the worst case # of bit switches?

Doing a 2-bit encoding in 3 bits, it is trivial to set values
where the worst-case switch is "1".

00: 000 111
01: 010 101
10: 100 011
11: 110 001

Doing some quick coding I developed the following lower bound
on the number of bits you need to switch:
SIZE=2, extra=1, max=1
SIZE=2, extra=2, max=1
SIZE=4, extra=1, max=2
SIZE=4, extra=2, max=2
SIZE=8, extra=1, max=3
SIZE=8, extra=2, max=3
SIZE=16, extra=1, max=5
SIZE=16, extra=2, max=4
SIZE=32, extra=1, max=7
SIZE=32, extra=2, max=7
SIZE=64, extra=1, max=11
SIZE=64, extra=2, max=11

(So the first line reads that with a 2-bit encoding and 1 extra bit
you cannot do better than having to switch 1 bit. I don't promise
that an encoding exists which can do this well, only that no encoding
can do better. It is interesting to note that adding extra bits
beyond 1 may not help too much.)

Lastly, there exists an upper bound on the worst case:
if we use only one extra bit and have each encoding
be the inverse of its "mate", then we find that our worst-case
change must be floor((SIZE+extra)/2). (If a given encoding generated
something worse than this, we should be using the inverse.)

So for the case of our original encoding using 64 bits, we know
we can reduce the max number of switches to 32. We also know
that no encoding can do better than getting it down to 11 (assuming
I made no mistakes).

I include the code I used below. Basically I'm counting how many
different numbers I can generate by doing K bit switches on a given
number and stopping when that is greater than 2^SIZE.

(If anyone actually cares I can explain my thoughts better.
I should however be doing something else... :-) )


-----------------------------------


#include <stdio.h>

int main(void)
{
    int i, j, y, k;
    double count;
    double max = 2;
    double g;
    // The number of encodings reachable by swapping up to N of Y bits
    // is 1 + Y + Y(Y-1) + ... + Y!/(Y-N)!.
    // We just keep going until that number is greater than or equal to
    // the number of encodings we need.
    for (i = 2; i <= 64; i *= 2)
    {
        max = 1.0;
        for (j = 0; j < i; j++)
            max = max * 2;
        for (j = 1; j < 3; j++)
        {
            y = i + j;
            count = 1;
            g = 1;
            for (k = 1; k <= y; k++)
            {
                g = g * (y - (k - 1));
                count = count + g;
                if (count >= max)
                {
                    printf("SIZE=%d, extra=%d, max=%d\n", i, j, k);
                    break;
                }
            }
        }
    }
    return 0;
}

Raymond Aubin, May 6, 1998

In article <6iq12a$spi$1...@msunews.cl.msu.edu>,

Mark W Brehob <bre...@cps.msu.edu> wrote:
>
>For much of what you are discussing I think you would need to
>worry about worst case performance (# of bits switched might
>be a good high-level approximation). So the question would be:
>What would be the worst case # of bit switches?
[cutting out fancy program and output]

Hmm, didn't we just have a thread on gray/grey code?

>Lastly there exists an upper bound on the worst case:
>if we use only one extra bit and have each encoding
>be an inverse of its "mate" then we find that our worst case
>change must be floor(SIZE+extra)/2. (if a given encoding generated
>something worse than this we should be using the inverse).
>
>So for the case of our orginal encoding using 64 bits we know
>we can reduce the max number of switches to 32. We also know
>that no encoding can do better than getting it down to 11. (assuming
>I made no mistakes).

I thought gray code has (extra=0, max=1) in your terminology (and
that was done without any programs).


--
Stanle...@pobox.com (613) 763-2831
Me? Represent other people? Don't make them laugh so hard.

Mark W Brehob, May 6, 1998

Raymond Aubin (sc...@bnr.ca) wrote:

: I thought gray code has (extra=0, max=1) in your terminology (and
: that was done without any programs).

Only on increment. I'm looking at the _worst_ arbitrary change.

Let's assume we need to be able to represent 4 different values on a bus.
Call them 0, 1, 2, and 3. Assuming we need to send this value in
one "clock tick", we would normally use 2 wires to send the signals,
and treat various encodings on those wires as the given value.

For example:
00 ->0
01 ->1
11 ->2
10 ->3

We notice however that if we were writing a 0 (00) and then needed
to write a 2 (11) to the bus we would see two wires change state.

If we used this 3-wire encoding:

bit pattern value
=========== ======
000 or 111 -> 0
001 or 110 -> 1
010 or 101 -> 2
011 or 100 -> 3

Then no matter what bit pattern we are currently writing, we can
represent ANY OTHER value by changing exactly one bit.

What I did in my previous post was bound just how much this can
buy you in terms of reducing the WORST CASE number of bit switches.

Clearly Gray code will work wonders with no extra bits if you are
restricted to increment/decrement. Very few busses have this property
though.

Mark

Jim Battle, May 6, 1998

Mark W Brehob wrote:
>
...

> Doing some quick coding I developed the following lower bound
> on the number of bits you need to switch:
> SIZE=2, extra=1, max=1
> SIZE=2, extra=2, max=1
> SIZE=4, extra=1, max=2
> SIZE=4, extra=2, max=2
> SIZE=8, extra=1, max=3
> SIZE=8, extra=2, max=3
> SIZE=16, extra=1, max=5
> SIZE=16, extra=2, max=4
> SIZE=32, extra=1, max=7
> SIZE=32, extra=2, max=7
> SIZE=64, extra=1, max=11
> SIZE=64, extra=2, max=11

I'll take it as a given that what you say is true. I just
want to further this notion a bit.

Larger word sizes are more efficient, it would seem, so
what you really want to do is operate on blocks of data.
Even if it isn't practical, you can pair the address and
data busses and code them together. For example, say a 16b
processor has a 16b address bus and a 16b data bus. Coding them
independently would require 34-36 wires and would have 8 to
10 transitions. If you coded them as a single 32b datum,
it would require 33 wires and would have 7 transitions.

Anyway, to really do things right, you want your interface
to "look ahead" and minimize the total number of transitions
of a sequence, rather than minimizing the distance from the
current code to the next. This starts to look like Viterbi
decoding.

Mark Thorson, May 7, 1998

In article <355107DA...@chromatic.com>,

Jim Battle <j...@chromatic.com> wrote:
>
>Larger word sizes are more efficient, it would seem, so
>what you really want to do is operate on blocks of data.

But larger numbers of bits are more likely to average to
50% bit transitions without shenanigans, if you assume
the source of data is unbiased. Obviously it isn't unbiased,
but I would guess the bias is different between address and
data, and different among different types of data.

To take advantage of anomalies farther away from 50%,
I think you need to keep the bus being encoded small,
like a byte or a nibble.

It's like if I toss a coin many times, what's the chance
it will come up heads 75% of the time? The fewer tosses
there are in the sample, the more likely I'll be farther
away from 50%.

>Even if it isn't practical, you can pair the address and
>data busses and code them together. For example, say 16b
>processor has a 16b address and 16b data bus. Coding them
>independently would require 34-36 wires and would have 8 to
>10 transitions. If you coded them as a single 32b datum,
>it would require 33 wires and would have 7 transitions.

You have astutely noticed that one additional wire added
to the bus being modified gets you most of the benefits.
In addition to being about as efficient as more than one
wire, and requiring fewer pins, it is also easier to implement.
I was assuming ROM tables everywhere to do the encoding.
With only one wire, the encoder is a row of XOR gates,
or maybe something simpler if the data is available as both
true and complement from the previous stage. The conversion
can probably be buried in some other clock phase, so zero
time penalty seems likely to me.

How does the equation change if high pin-count becomes cheap?
Lots of technologies like flip-chip, chip-scale packages, etc.
are being developed for high-end stuff, but less aggressive
implementations may be possible for low-end stuff like
PalmPilots. If spilling 4 or 8 pins reduces your power consumption
and RFI, that could easily be worth it.

>Anyway, to really do things right, you want your interface
>to "look ahead" and minimize the total number of transitions
>of a sequence, rather than minimizing the distance from the
>current code to the next. This starts to look like viterbi
>decoding.

Not possible! You don't know what address the next cache
miss will read, otherwise you'd fetch it in advance.
This would only work for data shovel applications, where
you knew in advance you were going to be moving some big
block.

Terje Mathisen, May 7, 1998

Mark Thorson wrote:
>
> All this talk about Gray code got me to thinking about
> weird stuff like using Gray code for the address bus,
> so that the minimum number of drivers would change state
> when accessing sequential addresses. That would minimize
> power consumption, crosstalk among the bus wires, and
> emitted EMI.
[snip]

> Can anyone see a reason to shoot this idea down right away?
> Or, has it already been thought of?

I'd say this idea has very definitely been thought of, but in a very
different context:

What you described (extra bits, multiple possible encodings, optimize
for state changes) sounds an awful lot like what I learned about Trellis
encoding, which is what your modem uses to optimize the chance of
properly decoding the new state.

Terje

--
- <Terje.M...@hda.hydro.com>
Using self-discipline, see http://www.eiffel.com/discipline
"almost all programming can be viewed as an exercise in caching"

dm...@ragnet.demon.co.uk, May 7, 1998

In article <eeeEsJ...@netcom.com>,

e...@netcom.com (Mark Thorson) wrote:
>
> All this talk about Gray code got me to thinking about
> weird stuff like using Gray code for the address bus,
> so that the minimum number of drivers would change state
> when accessing sequential addresses. That would minimize
> power consumption, crosstalk among the bus wires, and
> emitted EMI.

Yup, that's what people want.

> But then I started thinking if that's really what you want
> to do, what's the best way to do it?
>
> I think the answer is a sort of dynamically randomized code.

[snip]

>
> But the neat trick would be to have multiple 20-bit codes
> that map to the same 16-bit code. For example, both EF038
> and 56AA0 could map to 0001. That way, any time you have
> to send stuff down either the address or data bus, you have
> a bunch of choices about what 20-bit code to drive.
>
> By "dynamic randomizing", I mean that you choose among the
> possible choices for the best encoding, which generally will
> be the one that has the fewest number of bit transitions from
> the previous bus cycle. For example, if I want to send 0001
> and the last cycle was 44AE2, then 56AA0 would be preferred
> over EF038.
>
> There would be other considerations besides just the number
> of bit transitions. If one encoding has 7 bit transitions
> and the other has 8, I still might choose the one with 8
> if it took me closer to the 50/50 ratio of ones to zeroes.
>
> As you go to larger word sizes, it might be hard to get
> a good fit between the pattern of ones and zeroes for the
> current cycle as compared to the previous cycle, i.e. most
> comparisons will come close to 50% of the bits making a
> transition. For this reason, it might be best to apply this
> technique on a per byte lane basis, rather than across the
> whole bus.
>

> Can anyone see a reason to shoot this idea down right away?
> Or, has it already been thought of?
>

Parts of it have already been thought of. In the most recent Journal of Solid-
State Circuits, a bunch of IC designers proposed a scheme to invert half of
the signals on a bus to make the overall bit ratio closer to 1:1. They have
exactly the same goal as you. Their idea needed an extra pair of bits to
indicate which half of the bus had been inverted.

I think the analysis in this and the followup posting is better. The technique
you guys propose is scalable and gives a known benefit (well, at least
bounded).

As to the transition vs. 1:1 bit ratio, I guess that a particular design could
seek to go with either, depending on what speed/power tradeoffs the design
requires.

Duncan



Amitabh Menon, May 7, 1998

Yes, and that's why Gray coding is not used on all kinds of busses,
usually only on buses that have mostly increment/decrement value
patterns, e.g. address busses, which have different patterns only at
discontinuities.

For buses that carry more random values, like data busses, other
coding techniques have been proposed, like bus-invert coding (you
invert all the bits of the bus to reduce the number of toggles from
the previous bus value and send out an invert_on signal as well),
which give you significant switching reduction with just one more
signal and not much hardware to implement the conditional inversion.

If you look closely, the 3-bit example you have given here does exactly
that for the original 2-bit bus, with the MSB of the 3-bit bus being
the invert_on signal. And as you say correctly, just one extra bit is
required; additional bits don't quite help. So this scheme is really
the same as bus-invert coding if the number of extra bits
added is kept limited to one for any original bus size. Bus-invert
coding, though, was proposed a few years back. ;)

cheers,
-Amitabh

Tim Shoppa, May 7, 1998

In article <eeeEsK...@netcom.com>, Mark Thorson <e...@netcom.com> wrote:
>How does the equation change if high pin-count becomes cheap?
>Lots of technologies like flip-chip, chip-scale packages, etc.
>are being developed for high-end stuff

DEC's Flip Chip technology - developed in the 1960's -
is being reconsidered for modern "high-end" stuff? How
does the modern implementation differ from the 1960's implementation?
And since DEC owns the trademark on "Flip Chip", what do other
manufacturers call this technology?

Tim. (sho...@triumf.ca)

Patrick Kling, May 7, 1998

Mark, I like your idea.

Just applying a Gray code to the least significant bits
of the address bus might get most of the power savings
without the complexity of Gray-encoding the entire address.
When doing a cache line fill, the average number of state
transitions would be cut almost in half.

Pat Kling


In article <355140...@hda.hydro.com> Terje Mathisen <Terje.M...@hda.hydro.com> writes:

> From: Terje Mathisen <Terje.M...@hda.hydro.com>
> Newsgroups: comp.arch
> Date: Thu, 07 May 1998 07:03:08 +0200
> Organization: Hydro


> [Mark Thorson's original post snipped]

Douglas W. Jones, May 7, 1998

From article <6isiic$nvv$1...@nntp.ucs.ubc.ca>, by sho...@alph02.triumf.ca (Tim Shoppa):

>
> DEC's Flip Chip technology - developed in the 1960's -
> is being reconsidered for modern "high-end" stuff?

Yup, it has been in use for a few years.

> How does the modern implementation differ from the 1960's implemenation?

It doesn't. The term Flip Chip refers to flipping the semiconductor chip
over, so its top surface is facing the mounting substrate. The mounting
substrate has a conductive pattern on its surface, with solder bumps (or
equivalent) aligned with the contact pads on the surface of the
semiconductor wafer. When the chip is in position, with its pads resting
gently on the solder bumps, an ultrasonic probe is applied to the chip.
The vibration melts the solder, and this bonds the chip to the substrate.

You can see nice photos of this process with diode chips in the oldest
series of DEC handbooks, and essentially the same thing is being done with
modern VLSI chips.

The advantage is that it eliminates the individual bonding wires used for
each I/O pin on a conventional chip mounting (where the chip bottom is
soldered to a support pad, and then a wire bonding machine knits the
connections, one at a time, robotically). This makes bulk termination
inexpensive, and it works just fine as long as the substrate and chip have
comparable rates of thermal expansion.

> And since DEC owns the trademark on "Flip Chip", what do other
> manufacturers call this technology?

I wouldn't be surprised if DEC got a small bit of cash for the use of the
name. As of 1976, DEC was still protecting the FLIP-CHIP trademark, but my
1987 DECdirect catalog and price list doesn't list it. I get the feeling
that DEC had abandoned some of their early trademarks by then (for example,
they list PDP-11 as a trademark in 1987, but they no longer list PDP).

Doug Jones
jo...@cs.uiowa.edu


Jim Battle, May 7, 1998

Mark Thorson wrote:
>
> In article <355107DA...@chromatic.com>,
> Jim Battle <j...@chromatic.com> wrote:
> >
> >Larger word sizes are more efficient, it would seem, so
> >what you really want to do is operate on blocks of data.
>
> But larger numbers of bits are more likely to average to
> 50% bit transitions without shenanigans, if you assume
> the source of data is unbiased. Obviously it isn't unbiased,
> but I would guess the bias is different between address and
> data, and different among different types of data.

Well, first my comments were based solely on the assumption that
the table of #bit transitions posted by Mark W Brehob was correct
and was the actual figure, not just a bound, of the number of bits
that must switch.

> To take advantage of anomalies farther away from 50%,
> I think you need to keep the bus being encoded small,
> like a byte or a nibble.
>
> It's like if I toss a coin many times, what's the chance
> it will come up heads 75% of the time? The fewer tosses
> there are in the sample, the more likely I'll be farther
> away from 50%.

I don't understand the comment. For CMOS, what matters for
power isn't the number of one bits or zero bits, but the
number of transitions. In your original post you said that
you wouldn't use the encodings 0x00000 or 0xFFFFF since all
bits would have to change; but that is true only if you were
at one or the other and didn't have a different (redundant)
encoding to choose.

For instance, switching between 0xAAAA and 0x5555 is just
as bad, in terms of power expenditure, as switching between
0x0000 and 0xFFFF. Ground bounce is another matter. :-)

...


> >Anyway, to really do things right, you want your interface
> >to "look ahead" and minimize the total number of transitions
> >of a sequence, rather than minimizing the distance from the
> >current code to the next. This starts to look like viterbi
> >decoding.
>
> Not possible! You don't know what address the next cache
> miss will read, otherwise you'd fetch it in advance.
> This would only work for data shovel applications, where
> you knew in advance you were going to be moving some big
> block.

I think it would be possible, just not worthwhile. In any
high performance uP, there is a queue of outstanding read & write
transactions. This effectively gives you lookahead where you can
work your recoding magic. If the miss rate is low and the queue
doesn't have more than one entry, you encode without lookahead,
but the loss of power efficiency isn't so bad because you have
a low transaction rate. In code where you are memory bound, there
are more transactions and thus more need to be efficient, and
conveniently there will tend to be more items in the LD/ST queue.

Del Cecchi, May 7, 1998

In article <6isiic$nvv$1...@nntp.ucs.ubc.ca>,
sho...@alph02.triumf.ca (Tim Shoppa) writes:
[snip]

|>
|> DEC's Flip Chip technology - developed in the 1960's -
|> is being reconsidered for modern "high-end" stuff? How
|> does the modern implementation differ from the 1960's implementation?
|> And since DEC owns the trademark on "Flip Chip", what do other
|> manufacturers call this technology?
|>
|> Tim. (sho...@triumf.ca)

DEC's flip chip? I am not sure what you are talking about. Bell Labs/Western
Electric had some kind of "flip chip", but arguably the most famous is the solder
ball system used by IBM. The first one, used on discrete transistors in the early
60s, used tiny copper balls. Later the C4 (Controlled Collapse Chip
Connection) system was invented for use on components with more than 3 balls.

This was the forerunner of today's solder ball connection systems for chips and
substrates and has been in continual use inside IBM since the 60s. IBM calls it
C4, and we were doing it before DEC. What were the first PDP-11s made out of?

Today the balls are smaller and are in an array.
--

Del Cecchi
cecchi@rchland

Mark Thorson, May 7, 1998

In article <355219A5...@chromatic.com>,

Jim Battle <j...@chromatic.com> wrote:
>
>I don't understand the comment. For CMOS, what matters for
>power isn't the number of one bits or zero bits, but the
>number of transitions. In your original post you said that
>you wouldn't use the encodings 0x00000 or 0xFFFFF since all
>bits would have to change; but that is true only if you were
>at one or the other and didn't have a different (redundant)
>encoding to choose.

That's not what I said. I was suggesting not using 00000
or FFFFF because all the drivers on either the ground side
or the power side would be on. I've seen that kind of thing
contribute to noise on the power rails from a TTL tri-state
buffer, so I figured it might do something similar in the
buffers of a pad ring. I was proposing to choose encodings
near 50%, to average out the power, so the power rails wouldn't
see wide fluctuations in the static current.

>For instance, switching between 0xAAAA and 0x5555 is just
>as bad, in terms of power expenditure, as switching between
>0x0000 and 0xFFFF. Ground bounce is another matter. :-)

Bit transitions are a completely different issue. They're
bad too. Worse, probably. They contribute toward power
consumption and RFI emissions. The dynamic power probably
swamps any static power issues, so maybe bit transitions are
all that should be considered.

Scott Hess, May 8, 1998

In article <355140...@hda.hydro.com>,

Terje Mathisen <Terje.M...@hda.hydro.com> writes:
> Mark Thorson wrote:
> > All this talk about Gray code got me to thinking about weird
> > stuff like using Gray code for the address bus, so that the
> > minimum number of drivers would change state when accessing
> > sequential addresses. That would minimize power consumption,
> > crosstalk among the bus wires, and emitted EMI.
> [snip]
> > Can anyone see a reason to shoot this idea down right away? Or,
> > has it already been thought of?
>
> I'd say this idea has very definitely been thought of, but in a
> very different context:
>
> What you described (extra bits, multiple possible encodings,
> optimize for state changes) sounds an awful lot like what I learned
> about Trellis encoding, which is what your modem uses to optimize
> the chance of properly decoding the new state.

Why not run something like Trellis encoding over traces on the bus?
Admittedly, putting a data pump on the CPU and RAM probably isn't
going to help latency, but are there simpler variants which can be
hard-wired? After all, traces are inches long, while phone lines are
miles long and have echo cancellers and various sources of noise.
Presumably full Trellis encoding would be heavy overkill on a
motherboard. If you could get four bits per baud, you could run one
quarter the traces at the same clock, or the same number of traces at
one quarter the clock, or the same number of traces at the same clock
transferring four times the data.

If the hook is that the address on the bus tends to increment by
words, you could have each end of a transaction predict the next
address as the previous address plus one word width, have the sender
send the actual address XOR the predicted address, and the receiver
XOR the received address with the predicted address to get back the
actual address. For example, if the actual requests were for
addresses "0000", "0001", "1000", "1001", "1010", "1011", the address
bus would see "0000", "0000", "1000", "0000", "0000", "0000".

If the idea is to reduce the total number of transitions between 0 and
1, you could latch what was previously received over the address bus
and XOR that into the new address on both ends. So if the actual
requests went as "0000", "0001", "1000", "1001", "1010", "1011", the
transferred requests might look like "0000", "0000", "1000", "1000",
"1000", "1000".

Branch locality would seem to indicate that there would tend to be
more low-order bits set than high-order bits, so it might also be
worthwhile to shuffle the bits to blunt the EMI (I'm guessing that
it's more of a problem if the left 8 bits in a 64-bit bus are busy
constantly than if only every eighth bit is busy). Perhaps the
shuffle could also vary over time, so that the low-order transitions
would tend to distribute across the traces more evenly.

[Please be kind with flames about my ignorance - I'm only an
interested bystander in this stuff.]
--
scott hess <sc...@doubleu.com> (408) 739-8858 http://www.doubleu.com/
<Favorite unused computer book title: The Compleat Demystified Idiots
Guide to the Zen of Dummies in a Nutshell in Seven Days, Unleashed>

Tim Bradshaw, May 8, 1998

* Del Cecchi wrote:

> This was the forerunner of today's solder ball connection systems
> for chips and substrates and has been in continual use inside IBM
> since the 60s. IBM calls it C4 and we were doing it before DEC.
> What were the first PDP-11s made out of?

I think that the DEC flip chips were very small (component-wise) ICs
with one or a couple of flip-flops on them, which were used to make early
DEC machines, nothing to do with connection systems at all.

(Actually they may have been thin-film, or even little boards with
transistors on them; I don't think I've ever seen one!)
--tim

Douglas W. Jones, May 8, 1998

From article <6isrjk$11kc$1...@news.rchland.ibm.com>, by cec...@signa.rchland.ibm.com (Del Cecchi):
>
> DEC's flip chip? I am not sure what you are talking about. ...
> ... has been in continual use inside IBM since the 60s. IBM calls it

> C4 and we were doing it before DEC. What were the first PDP-11s
> made out of?

PDP-11s were irrelevant. That's a machine built in 1970, using TTL ICs
in conventional DIP packages soldered to two-sided boards. The term
Flip Chip was indeed used for those boards, but that's a carryover
(to keep the rights to the trademark) from the days when DEC actually
used flip-chip technology.

The original DEC flip-chip modules were used in the DEC PDP-8, PDP-9 and
PDP-10. These were discrete transistor machines, and the actual use
of flip-chip mounted semiconductor components was extremely limited,
but DEC was very proud of the technology.

I have a copy of DEC's "Flip Chip (TM) Modules" book, dated 9/64.
This shows the prototypical DEC Flip Chip module on the cover, a
single-sided circuit board 5.5 by 2.5 inches, with transistors and
Flip Chip semiconductor modules mounted on it. The Flip Chip
semiconductor packages are 1 inch by 5/16 inch by 3/16 inch
epoxy-coated alumina modules with 6 pins on 0.2 inch centers. Each Flip
Chip semiconductor module contains some mix of diodes (bonded to the
alumina substrate using flip-chip bonding) and resistors silk-screened
on the substrate along with the conducting layer. Page 8 of the
booklet shows the silk-screen setup, with uncoated substrates being
hand-placed in a jig for printing, and a row of women hand-placing
diode chips on the substrates for "thermal compression bonding".

In sum, this is a technology that went to market before the
IBM System 360.

It may be that IBM developed something similar at about the same time.
It is clear that DEC abandoned this technology! They never put
transistor chips in their flip-chip packages. They always used
off-the-shelf discrete transistors, with flip-chip packages off to the side.

Doug Jones
jo...@cs.uiowa.edu

Mike Albaugh, May 8, 1998

Mark Thorson (e...@netcom.com) wrote:
: In article <355219A5...@chromatic.com>,
: Jim Battle <j...@chromatic.com> wrote:
: [...] I was suggesting not using 00000
: or FFFFF because all the drivers on either the ground side
: or the power side would be on. I've seen that kind of thing
: contribute to noise on the power rails from a TTL tri-state
: buffer, so I figured it might do something similar in the
: buffers of a pad ring. I was proposing to choose encodings
: near 50%, to average out the power, so the power rails wouldn't
: see wide fluctuations in the static current.

Another approach, which seems to me (armchair observer
though I am :-) to be a lot simpler, is the "address stepping"
used (or merely allowed?) on PCI busses. I have a hard time
believing that any sort of logic for re-coding would add less
delay than simply (! :-) switching the bus-drivers at slightly
different times. If "total driving power" (avoid all zeros
or ones) is an issue, rather than transitions, then of course
"address stepping" doesn't help. OTOH, "total driving power"
is a lot more amenable to low-tech solutions like having
adequate supply lines, no? The issue of transient power can
be more difficult, yet it _seems_ to me that address-stepping
is a "step" in the right direction. :-) I'd like to see opinions
from the better-informed members of the group on this.

: Bit transitions are a completely different issue. They're
: bad too. Worse, probably. They contribute toward power
: consumption and RFI emissions. The dynamic power probably
: swamps any static power issues, so maybe bit transitions are
: all that should be considered.

My gut feeling, exactly. Cracker-barrel tale: Long ago
and about 100 miles away, I worked for a company that made
Telco equipment. One of the selling-points of our signalling gear
was that each unit (typically used in "banks" of 24) had its
own oscillator. Most of the competition had a "master oscillator"
per bank. The outputs fed FDM transmission gear, sometimes microwave.
The fact that our units were _not_ coherent helped keep both RF
power and inter-channel interference down.

Mike
| alb...@agames.com, speaking only for myself

Dave Brockman

May 8, 1998

Tim Bradshaw wrote:

> I think that the DEC flip chips were very small (component wise) ICs
> with one or a couple of flip-flops on which were used to make early
> DEC machines, nothing to do with connection systems at all.
>
> (Actually
> they may have been thin-film or even little boards with
> transistors on, I don't think I've ever seen one!)
>
> --tim

I was just browsing thru my 7/65 DEC Flip Chip catalog. Not much was
said about what a flip chip was. There were some pictures, though, of
the manufacturing process. One showed a little ceramic substrate on to
which diodes were being placed. From the discussion here, this must be
the fabled "flip chip."

The catalog dealt with Flip Chip modules which were actually little
circuit cards. The modules in the catalog, I believe, had discrete
transistors, not on the ceramic substrate. Maybe this came later?

I am a fairly old DEC hand but I never knew why the little cards were
called Flip Chip modules. Before my time, I guess. I've still got my
PDP-8/S which is just chock full of them, however.

**************************
Dave Brockman/Portable
da...@oz.net
http://www.oz.net/~daveb
**************************

Dr. Peter Kittel

May 8, 1998

In article <6it2mj$bs8$1...@flood.weeg.uiowa.edu> jo...@pyrite.cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:
>From article <6isiic$nvv$1...@nntp.ucs.ubc.ca>, by sho...@alph02.triumf.ca (Tim Shoppa):
>>
>> DEC's Flip Chip technology - developed in the 1960's -
>> is being reconsidered for modern "high-end" stuff?
>
>Yup, it has been in use for a few years.
>
>> How does the modern implementation differ from the 1960's implementation?
>
>It doesn't. The term Flip Chip refers to flipping the semiconductor chip
>over, so its top surface is facing the mounting substrate. The mounting
>substrate has a conductive pattern on its surface, with solder bumps (or
>equivalent) aligned with the contact pads on the surface of the
>semiconductor wafer.

Aha, so this sounds very much like the BGA (Ball Grid Array) package
used by Motorola for its PPC chips. In fact this BGA has it in two
steps, the chip itself is mounted this way to a bigger ceramic substrate,
which carries on its underside the BGA which is then soldered to the PCB.

> When the chip is in position, with its pads resting
>gently on the solder bumps, an ultrasonic probe is applied to the chip.
>The vibration melts the solder, and this bonds the chip to the substrate.

Hmm, so there's a difference. BGA is soldered with normal SMD techniques.

>The advantage is that it eliminates the individual bonding wires used for
>each I/O pin on a conventional chip mounting (where the chip bottom is
>soldered to a support pad, and then a wire bonding machine knits the
>connections, one at a time, robotically). This makes bulk termination
>inexpensive, and it works just fine as long as the substrate and chip have
>comparable rates of thermal expansion.

Yup, same for BGA.

--
Best Regards, Dr. Peter Kittel // http://www.pios.de of PIOS
Private Site in Frankfurt, Germany \X/ office: peterk @ pios.de


Ken Smith

May 9, 1998

In article <3553E747...@oz.net>, Dave Brockman <da...@oz.net> wrote:
>Tim Bradshaw wrote:
>
>> I think that the DEC flip chips were very small (component wise) ICs
[....]

Perhaps this is a case of a different technology getting the same name,
but many years ago there was such a thing as a "flip-chip" socket. It
looked like a wire wrap socket with the pins bent around and up beside
the package. You could wire your circuit from the top side instead of
constantly having to flip the board over to wire wrap. It made for much
nicer-looking and easier-to-mount circuits.
--
kens...@rahul.net forging knowledge


John R Levine

May 9, 1998

>PDP-11s were irrelevant. That's a machine built in 1970, using TTL ICs
>in conventional DIP packages soldered to two-sided boards. The term
>Flip Chip was indeed used for those boards, but that's a carryover
>(to keep the rights to the trademark) from the days when DEC actually
>used flip-chip technology.
>
>The original DEC flip-chip modules were used in the DEC PDP-8, PDP-9 and
>PDP-10. These were discrete transistor machines, and the actual use
>of flip-chip mounted semiconductor components was extremely limited,
>but DEC was very proud of the technology.

According to "Computer Engineering" by Bell et al., the Flip Chips
were first used in the PDP-7.

Up through the PDP-6 they'd used larger "system modules" which were 5"
or 10" square with metal frames. The flip chips were 2.5" by 5" or a
double one was 5" by 5". Flip chips were small enough that they
didn't need a frame, and the backplane into which they were plugged
could be automatically wire-wrapped by Gardner-Denver equipment, a
major cost saving. The cards themselves weren't unusual: conventional
components soldered onto conventional PC boards, which happened to have
one end cut to plug into a socket and a plastic handle riveted to the
other.

The flip-chips had small amounts of logic, a few flip-flops, or a
relatively complex one contained one bit of the PDP-8's accumulator,
so they built their computers out of higher level building blocks than
single components.

I've also heard that the system modules had connector problems which
made systems unreliable. The standard way to fix a flaky PDP-6 was to
open the back of the cabinet and whack the cards with a rubber mallet.
DEC got those problems under control later, and the PDP-8/A and 11/45
used big modules again.

--
John R. Levine, IECC, POB 727, Trumansburg NY 14886 +1 607 387 6869
jo...@iecc.com, Village Trustee and Sewer Commissioner, http://iecc.com/johnl,
Member, Provisional board, Coalition Against Unsolicited Commercial E-mail

Joe Morris

May 9, 1998

Dave Brockman <da...@oz.net> writes:

>I am a fairly old DEC hand but I never knew why the little cards were
>called Flip Chip modules. Before my time, I guess. I've still got my
>PDP-8/S which is just chock full of them, however.

...but you still see the pattern of the Flip Chip modules: the
Digital logo with each of the seven letters in its own rectangular
box. This was well-established when I first encountered DEC products
in 1962 (in the form of the PDP-1) and at that time the "common knowledge"
was that the boxes were supposed to represent Flip Chip modules.
I'm not sure that I've ever seen an official DEC statement to that
effect, but it's a reasonable explanation. (Of course, that *was*
36 years ago...)

Lots of DEC users have seen another logo that was derived from the
early DEC products: the outline of the DECUS logo is supposedly
taken from the appearance of the CRT on the PDP-1. The pattern
certainly matches, and the derivation story is told by a one-time
board member of DECUS. Can anyone here support or dispute this?

Joe Morris

Jonathan G Campbell

May 9, 1998
to Tim Bradshaw

Tim Bradshaw wrote:
>
> * Del Cecchi wrote:
>
> > This was the forerunner of today's solder ball connection systems
> > for chips and substrates and has been in continual use inside IBM
[...]

> I think that the DEC flip chips were very small (component wise) ICs
> with one or a couple of flip-flops on which were used to make early
> DEC machines, nothing to do with connection systems at all.
>
> (Actually
> they may have been thin-film or even little boards with
> transistors on, I don't think I've ever seen one!)

Yes, see:

http://www.cis.ohio-state.edu/hypertext/faq/usenet/dec-faq/pdp8-models/faq-doc-2.html

I remember seeing them around a PDP 8/S I used. A typical one like the
flip-flops you describe, was a printed circuit board about 2 inches by
4, about half the size of a small PC card; I _think_ it was connected
via an edge connector. Since we used the PDP 8 purely as an editor, it
never concerned me how one employed the FLIP CHIPs. Many PDP 8s were
used in laboratories as data-loggers and controllers, but, clearly, at
two flip-flops or a few NAND gates per card, the rack wouldn't take long
to fill up!

Of course, DEC started off (1957) as a manufacturer of components
(modules) on cards.

Best regards,

Jon Campbell

--
Jonathan G Campbell Univ. Ulster Magee College Derry BT48 7JL N. Ireland
+44 1504 375367 JG.Ca...@ulst.ac.uk http://www.infm.ulst.ac.uk/~jgc/

Paul Repacholi ( prep )

May 10, 1998

I have heard the same from digits `who should know'. In, I think,
Computer Engineering, it goes further and points out that it
was taken from the display of the PDP-1B, 1 built, 0 shipped.

--
~paul ( prep ) Paul Repacholi,
1 Crescent Rd.,
erepa...@cc.curtin.edu.au Kalamunda,
+61 (08) 9257-1001 Western Australia. 6076

Mark Thorson

May 10, 1998

In article <6j1to0$3...@ivan.iecc.com>, John R Levine <jo...@iecc.com> wrote:
>
>Up through the PDP-6 they'd used larger "system modules" which were 5"
>or 10" square with metal frames. The flip chips were 2.5" by 5" or a

I believe they were called "system building blocks". When I
started at Tessera, I pointed out that their use of the term
may already be a DEC trademark.

>I've also heard that the system modules had connector problems which
>made systems unreliable. The standard way to fix a flaky PDP-6 was to
>open the back of the cabinet and whack the cards with a rubber mallet.
>DEC got those problems under control later, and the PDP-8/A and 11/45
>used big modules again.

Yeah, the "harp". That was an array of 22 wires connecting the
circuit board to the edge connector. The standard _right_ way
to fix reliability problems was to reflow all the solder joints
in the harp. When I was at UC-Berkeley, the Computer Club did that
to two pdp-5's that had been donated. Anybody know what happened
to Bill Jolitz? He did a lot of that work.

Mark Thorson

May 10, 1998

Unless I'm mistaken, the paper mentioned on the net
was _Journal_of_Solid-State_Circuits_, volume 33, number 5,
pages 702-6, "A Data Transition Look-Ahead DFF Circuit for
Statistical Reduction in Power Consumption". Describes an
output driver flip-flop that inhibits clocking the output
stage when the new state is same as previous state.
(Output stage has large-geometry transistors.) Claims 29%
power reduction for data where bit transitions occur 25%
of the time. (Why doesn't it quote a figure for
worst-average-case, 50%?)
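The gating described above can be sketched with a toy count (my own model, not the paper's circuit): clock the large output stage only when the incoming datum differs from the stored one, and see what fraction of clockings disappears on a stream with a 25% transition rate. The comparator's own power is ignored here, which is presumably part of why the paper claims only 29% rather than the raw ~75% of output-stage clockings saved.

```python
import random

def clock_events(bits, look_ahead):
    """Count how often the output stage is clocked for a bit stream.
    With look-ahead, a comparator suppresses the clock whenever the
    incoming datum equals the stored value, so the (large) output
    transistors are only driven on actual transitions."""
    q, events = 0, 0
    for d in bits:
        if look_ahead and d == q:
            continue  # comparator inhibits the output-stage clock
        events += 1
        q = d
    return events

random.seed(1)
# Stream in which a bit transition occurs about 25% of the time.
stream = [0]
for _ in range(100_000):
    stream.append(stream[-1] ^ (random.random() < 0.25))

plain = clock_events(stream, look_ahead=False)  # clocked every cycle
gated = clock_events(stream, look_ahead=True)   # only on transitions
print(f"output-stage clockings saved: {1 - gated / plain:.0%}")
```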

So, that's not really relevant to my original proposal,
but thank you for calling and sharing. :-)

However, in my foraging yesterday, I found some patents
that are relevant, in fact they describe my scheme exactly.

U.S. Patents 5,572,736 and 5,574,921 describe my
original proposal, and the special case of having an
invert bus bit. My scheme assumed the receiver
would be able to decode the expanded encoding without
being told which of multiple encodings was chosen by
the transmitter. The '736 patent would not cover my
don't-tell-the-receiver-which-decode-to-use, because
it requires the transmitter to specify a decoding
algorithm. But the '921 patent requires no such thing,
so my scheme would fall within its scope. You can view
U.S. patents at http://patent.womplex.ibm.com
These two patents are almost identical except for
the claims, so I would suggest not bothering to look
at '736.

It's interesting these patents were filed in 1995.
One would think this scheme would have made sense
10 or 15 years earlier.
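For concreteness, here is one way the multiple-codeword idea can be sketched (my own construction with arbitrary illustrative masks, not the encoding from the patents or the original proposal): spend the 4 extra bits on a tag selecting one of 16 fixed XOR masks, so any codeword decodes unambiguously without the transmitter announcing its choice, and the transmitter simply picks the candidate with the fewest transitions from the previous bus state.

```python
# Toy 16->20-bit dynamically randomized bus encoder.  The 4 extra bits
# are a tag selecting one of 16 fixed XOR masks, so the receiver can
# decode any codeword without being told which one was chosen.  The
# mask constants are hypothetical, chosen only for illustration.
MASKS = [(0x9E37 * t) & 0xFFFF for t in range(16)]

def candidates(value):
    """All 20-bit codewords (tag in the low 4 bits) that decode to value."""
    return [((value ^ MASKS[t]) << 4) | t for t in range(16)]

def encode(value, prev_bus):
    """Choose the codeword with the fewest bit transitions from prev_bus."""
    return min(candidates(value), key=lambda c: bin(c ^ prev_bus).count("1"))

def decode(codeword):
    tag = codeword & 0xF
    return ((codeword >> 4) ^ MASKS[tag]) & 0xFFFF

# Example from the head of the thread: send 0x0001 when the previous
# bus state was 0x44AE2.
prev, value = 0x44AE2, 0x0001
word = encode(value, prev)
assert decode(word) == value
```

The minimum over 16 candidates can never be worse than always using mask 0, and is typically better; a real implementation would also fold in the 50/50 ones-density preference from the original post.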

Mark W Brehob

May 11, 1998

Mark Thorson (e...@netcom.com) wrote:

And while we are on the topic, I discovered an error in the program
I used to determine the "can't do better than this" case for
encoding. Oops.

The actual table is:

base              Extra bits
case      1    2    4    8   16   32   64  128
====    ===  ===  ===  ===  ===  ===  ===  ===
   2      1    1
   4      2    2    2
   8      4    4    3    3
  16      8    8    7    6    5
  32     16   15   13   12   10    8
  64     32   30   28   25   22   19   16
 128     64   61   57   53   47   41   35   30

So if you have a 32 bit bus and you add 4 extra bits the best you can hope to
do for "worst case number of switches" is 13. So there exists no scheme which
can use these 36 bits to represent the 2^32 different encodings which has a
worst case number of switches lower than 13.
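Brehob doesn't show the derivation, but one argument that appears to reproduce his table (my assumption, not necessarily his math) is sphere-covering: whatever m-bit word is currently on the bus, the next cycle must be able to express any of the 2^n values with a codeword at Hamming distance at most d, so the radius-d ball around the current state must contain at least 2^n distinct words, i.e. sum_{i<=d} C(m,i) >= 2^n with m = n + k. Exact integer arithmetic also sidesteps the floating-point rounding worry below.

```python
from math import comb

def min_worst_case_transitions(bus_bits, extra_bits):
    """Smallest d such that sum_{i<=d} C(m, i) >= 2**bus_bits, with
    m = bus_bits + extra_bits.  (Assumed sphere-covering lower bound on
    the worst-case number of switching wires; the post doesn't show
    its derivation.)"""
    m = bus_bits + extra_bits
    need = 1 << bus_bits
    total = 0
    for d in range(m + 1):
        total += comb(m, d)
        if total >= need:
            return d
    return m

print(min_worst_case_transitions(32, 4))  # -> 13, matching the quoted cell
```

This also gives 16 for a 32-bit bus with a single extra (inversion) bit, matching the table.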

Similar math can be found in the patents Mark T. refers to. If anyone cares I
can explain how these results were arrived at. (The patents just mumble, never
justifying their math. And for the record, I found my error before Mark found
the patents :-) Because of a heavy use of fp addition to handle large numbers
it is very possible that these numbers are "off by one" due to rounding
problems. I'm tempted to do it in Mathematica to avoid this problem... (It
seems to handle arbitrary-precision math quite nicely.)

Notice this does not mean you can actually achieve these bounds, although
since we can achieve the bound in the 1-extra-bit case (inversion bit) it
seems possible...

Oh, and BTW IBM owns the patents. I really believe that the one bit scheme
will see life in both external busses and internal (to a chip) busses.

Mark Brehob

==================================================
: Unless I'm mistaken, the paper mentioned on the net
: was _Journal_of_Solid-State_Circuits_, volume 33, number 5,

[clip]

Douglas W. Jones,201H MLH,3193350740,3193382879

May 11, 1998

From article <3553E747...@oz.net>, by Dave Brockman <da...@oz.net>:

> I was just browsing thru my 7/65 DEC Flip Chip catalog. Not much was
> said about what a flip chip was. There were some pictures, though, of
> the manufacturing process. One showed a little ceramic substrate on to
> which diodes were being placed. From the discussion here, this must be
> the fabled "flip chip."

> The catalog dealt with Flip Chip modules which were actually little
> circuit cards. The modules in the catalog, I believe, had discrete
> transistors, not on the ceramic substrate. Maybe this came later?

I have a vintage PDP-8 and a large number of Flip Chip modules from
the 1965 era. Many of these modules incorporate Flip Chip semiconductor
components. They were used extensively in Flip Chip modules in the
early part of the PDP-8 production run.

In fact, the classic PDP-8 was actually advertised as an integrated
circuit computer on the strength of the Flip Chip technology! Technically,
a flip-chip semiconductor component is a hybrid integrated circuit, so this
is not a false advertising claim, but the Flip Chip components were about
the smallest scale integrated circuits ever to have gone by that name!
Each Flip Chip component could be directly replaced by discrete components
that occupied the same volume (but in a different form factor).

Note that the IBM System 360 was also sold as an integrated circuit machine
but was also based on hybrid IC packages. These, at least, had about 16
pins, and the packages included multiple transistors, whereas the original
DEC Flip Chip components were integrated diode-resistor networks.

Doug Jones
jo...@cs.uiowa.edu

Douglas W. Jones,201H MLH,3193350740,3193382879

May 11, 1998

From article <6j1v16$r...@top.mitre.org>, by jcmo...@mwunix.mitre.org (Joe Morris):

> Dave Brockman <da...@oz.net> writes:
>
>>I am a fairly old DEC hand but I never knew why the little cards were
>>called Flip Chip modules. Before my time, I guess. I've still got my
>>PDP-8/S which is just chock full of them, however.
>
> ....but you still see the pattern of the Flip Chip modules: the
> Digital logo with each of the seven letters in its own rectangular
> box. ...

I'm fairly convinced that the d|i|g|i|t|a|l logo is not a reference to
flip-chip modules; it predates the introduction of such modules by several
years!

But, if you look at DEC's product line from 1961-62 (I have the catalog),
you'll find that the form-factor of the rectangles in the d|i|g|i|t|a|l
logo is a close match to the form-factors in the original Lab modules.
Each lab module was a system module, packaged in a box, with a black
bakelite face plate with banana jacks in it for patch-cord wiring of
digital systems. Sadly, I don't have any of these, although I have a pile
of system modules.

> I'm not sure that I've ever seen an official DEC statement to that
> effect, but it's a reasonable explanation.

The problem with this explanation is that the color scheme of the DEC
logo completely destroys the illusion. In 1962, for example, DEC's
letterhead had the rectangles in alternating black and teal, and I have
one old DEC power supply with the rectangles in alternating blue and red
(ugly!). Take a look at my web page on DEC logos,

http://www.cs.uiowa.edu/~jones/pdp8/logos/

for an assortment of examples I've collected, including the 1962 example
cited above.
Doug Jones
jo...@cs.uiowa.edu

Joe Morris

May 11, 1998

[mailed and posted]

jo...@pyrite.cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) writes:

> jcmo...@mwunix.mitre.org (Joe Morris):

>> ....but you still see the pattern of the Flip Chip modules: the
>> Digital logo with each of the seven letters in its own rectangular
>> box. ...

>I'm fairly convinced that the d|i|g|i|t|a|l logo is not a reference to
>flip-chip modules, it predates the introduction of such modules by several
>years!

I've had e-mail from another reader saying the same thing as your
comments here. I'm not so stupid to claim that my memory of events
36 years ago is infallible, so I may be mixing up recollections of that
era with the DEC products of a few years later.

OTOH, when I was working with the PDP-1 several of the other hackers
there (in the old, respectable sense of the word!) were DEC designers.
They may have been using the term before it was first used as
the name of an official DEC product, and I could have picked it up
from them.

Joe Morris

Ron G. Minnich

May 12, 1998

Tim Bradshaw (t...@aiai.ed.ac.uk) wrote:
: I think that the DEC flip chips were very small (component wise) ICs
: with one or a couple of flip-flops on which were used to make early
: DEC machines, nothing to do with connection systems at all.
: (Actually
: they may have been thin-film or even little boards with
: transistors on, I don't think I've ever seen one!)

I have some. They're not flip chips in the solder-ball sense. Actually,
I never could figure out the name, since there's absolutely nothing
special about these cards!
ron

--
Ron Minnich |Java: an operating-system-independent,
rmin...@sarnoff.com |architecture-independent programming language
(609)-734-3120 |for Windows/95 and Windows/NT on the Pentium
ftp://ftp.sarnoff.com/pub/mnfs/www/docs/cluster.html


har...@dev/null.netaxis.com

May 12, 1998

Interesting thread.

In article <6j7375$oo8$1...@flood.weeg.uiowa.edu>, jo...@pyrite.cs.uiowa.edu
(Douglas W. Jones,201H MLH,3193350740,3193382879) wrote:

[snip]

> Each lab module was a system module, packaged in a box, with a black
> bakelite face plate with bananna jacks in it for patch-cord wiring of
> digital systems. Sadly, I don't have any of these, although I have a pile
> of system modules.

So what was inside the modules? Individual components, or Flip-flops/logic
gates/etc, or ...?

Anyways, if you're interested in getting these up and running, you can
probably find replacement plugs and hook up wire at any hobby shop that
deals in R/C model aircraft supplies - we use them to connect battery
chargers, engine starters, etc. Assuming of course that they use the type
of banana jack I'm thinking of. HTH.

Push the button, Frank.
-Leander

________________________________________ ___ __ _
Leander Harding III | I have altered my email address to frustrate
harding (at) netaxis (dot) com | automated spam mailers. To reply, remove the
<http://www.netaxis.com/~harding>| "dev/null", or use the address to the left.

Mike Albaugh

May 12, 1998

harding@dev/null.netaxis.com wrote:
: Interesting thread.

: In article <6j7375$oo8$1...@flood.weeg.uiowa.edu>, jo...@pyrite.cs.uiowa.edu
: (Douglas W. Jones,201H MLH,3193350740,3193382879) wrote:

: [snip]
: > Each lab module was a system module, packaged in a box, with a black
: > bakelite face plate with bananna jacks in it for patch-cord wiring of
: > digital systems. Sadly, I don't have any of these, although I have a pile
: > of system modules.

: So what was inside the modules? Individual components, or Flip-flops/logic
: gates/etc, or ...?

I don't recall the "lab modules" being all that similar to "system
modules". The "lab modules" that I used as a student at Berkeley, circa
1970, were physically as described above, but had DTL and TTL ICs on the
boards themselves. Although one _might_ call the jacks "Banana", I'd be
more likely to call them "pin". The had the little flat springy sides,
but were much smaller in diameter than the sort of jack one would find
on a VTVM. The "System Modules" I used to maintain the terminal MUX on the
XDS-940 (nee SDS-930) used discrete transistors. IIRC, each "UART"
consisted of three system-modules, roughly corresponding to XMT, RCV, CTL.

The comp.arch readers can stop reading now. For the
alt.folklore.computers folks, I'll add a reminiscence of coding
a "bit-boffing" software UART on the Computer Club's SS-90
("Solid State", but also containing many vacuum tubes and a drum
main-memory), so that we could have "remote access" via one of
the Hazeltine-2000 CRT terminals. These had core memory and integrated
circuits, thus being arguably "more powerful" than the CPU :-)
(With slightly different software, we also "talked" to a TTY-15,
which was at least the right era... :-)

Douglas W. Jones,201H MLH,3193350740,3193382879

May 12, 1998

From article <harding-1205...@du-3-8.netaxis.com>, by harding@dev/null.netaxis.com:

> In article <6j7375$oo8$1...@flood.weeg.uiowa.edu>, jo...@pyrite.cs.uiowa.edu
> (Douglas W. Jones,201H MLH,3193350740,3193382879) wrote:
>> Each lab module was a system module, packaged in a box, with a black
>> bakelite face plate with bananna jacks in it for patch-cord wiring of
>> digital systems. Sadly, I don't have any of these, although I have a pile
>> of system modules.

> So what was inside the modules? Individual components, or Flip-flops/logic
> gates/etc, or ...?

The System Modules and their cousins (based on the same circuit boards) the
Digital Building Blocks lab module family, included such things as:

A 6 bit latch, suitable for building registers.
6 2-input nand gates.

Multiples of 6 were common. After all, the dominant word size at the time
was 36 bits, and DEC started with 18 bit machines (the PDP-1 and PDP-4)
and then moved down to 12 bits (the PDP-5) and up to 36 bits (the PDP-6).

System Modules were no longer used in the PDP-7 (18 bits) and the PDP-8
(12 bits). These were the first to use Flip Chip logic (both in the sense
of Flip Chip modules as the basic circuit board format, and in the sense of
Flip Chip hybrid ICs -- really just hybrid resistor-diode networks).

The scale of integration found on the System Modules was essentially the
same as on the first generation Flip Chip boards -- each circuit board was
essentially the equivalent of an SSI integrated circuit. The difference
was that the System Modules were physically bigger, used expensive
connectors, had an aluminum frame on each module, and required the use of
a hand-wired backplane. The Flip-Chip circuit boards were about half the
area, used card-edge connectors, and could be plugged into backplanes that
were set up for semi-automatic wire-wrap interconnection.

> Anyways, if you're interested in getting these up and running, you can

> probably find replacement plugs and hook up wire ...

Fortunately, banana plugs are commonplace in most of the electronic parts
catalogs, even today.

Doug Jones
jo...@cs.uiowa.edu

Douglas W. Jones,201H MLH,3193350740,3193382879

May 12, 1998

From article <6j9ved$45v$1...@void.agames.com>,
by alb...@agames.com (Mike Albaugh):
>
> : So what was inside the modules? Individual components, or Flip-flops/logic
> : gates/etc, or ...?
>
> I don't recall the "lab modules" being all that similar to "system
> modules". The "lab modules" that I used as a student at Berkeley, circa
> 1970, were physically as described above, but had DTL and TTL ICs on the
> boards themselves.

The DEC logic lab of the late 1960's and the early 1970's had little in
common with the original DEC logic lab from the early 1960's. The packaging
was different, the component technology was different, and even the voltage
levels were different. Only the purpose, helping people prototype or teach
digital design, was the same.

Anyone remember DEC PDP-16 RTMs? I have fond memories of building a
graphics processor based on that family of digital system building blocks.

Doug Jones
jo...@cs.uiowa.edu

Joe Keane

May 13, 1998

In article <SCOTT.98M...@slave.doubleu.com>

Scott Hess <sc...@doubleu.com> writes:
>Why not run something like Trellis encoding over traces on the bus?

I think it's inevitable that interconnects get more sophisticated.
Techniques that are used for sending signals thousands of miles also
make sense for sending signals a few inches inside a box.

The main reason is that the compute power of chips keeps increasing,
while the characteristics of busses don't change a whole lot.

We can have adaptive skew compensation, echo cancellation, crosstalk
elimination, frequency equalization, and lots of other nifty buzzwords.

Better encoding can give you better immunity to noise and transients,
while at the same time using less power than current crude encodings.

High-bandwidth busses can still use ribbon cable, but the lines may be
generalized channels, not corresponding to specific bits like now.

Bandwidth may depend on line quality like it does now for modems and
phone lines. Hopefully it doesn't rain inside your computer.

We can have software that evaluates your busses and points out problems.
People may speed up their computer by moving a cable a half an inch.

Or we may have chips with fiber transceivers.

--
Joe Keane, amateur mathematician
