
WorkS Digest V1 #45


ucbvax!works

From JSol@RUTGERS Thu Dec 17 20:50:24 1981
Works Digest Friday, 18 Dec 1981 Volume 1 : Issue 45

Today's Topics: MC68000 Paging Schemes
                64K And 256K Memory Chips - Is The US Falling Behind?
[Sorry for the delay putting out digests,
there hasn't been too much to send out -JSOL]
----------------------------------------------------------------------

Date: 16 Dec 1981 1624-PST
From: Jim McGrath <JPM at SU-AI>
Subject: 64K and 256K memory chips
Sender: CSD.MCGRATH at SU-SCORE
Reply-To: CSD.MCGRATH at SU-SCORE

I have been talking to some business folks who generally seem to have
a good feel for the chip market. They were bemoaning the fact that US
manufacturers have pretty much given the 64K market to the Japanese
(only Motorola and TI are selling those chips in any significant
quantity, and the Japanese have captured an estimated 60 to 70% of the
market). I was wondering what people in the "front lines" of chip
manufacture think about this. It appears that the US spent far too
much time and energy on trying to come up with a denser chip to
increase yields, resulting in practically no one coming up with a 64K
chip for the general market (as opposed to in house IBM and ATT use).
With this head start, the Japanese could head us off in the 256K
market as well, although perhaps all the time we have spent on the 64K
chip will give us a technical edge there.

This is an important issue. If US industry falls behind on
64K and 256K chips for the mass market, we will lose the
computer manufacturer market, which will decrease revenue and
then really knock us out of the R&D race. And once the memory
chips go, can logic chips be far behind? Eventually our
"cutting edge" industry will subsist on DoD handouts (since they
are a more reliable source of chips than the Japanese).

Comments?

Jim

------------------------------

Date: 14 Dec 1981 0950-PST
From: Ian H. Merritt <MERRITT at USC-ISIB>

Precisely. A real implementation is one which works. Some are better
than others, but I refer to a real implementation as one which doesn't
limit the instruction set at all. The 2 chip solution is a royal
kluge, but it works and it doesn't have any effect on the available
set of instructions; only the effective speed.
<>IHM<>

------------------------------

Date: 14 December 1981 20:33-EST
From: Robert A. Morris <RAM at MIT-MC>
Subject: M68000 paging schemes.

I have the impression that those who have adopted the two cpu schemes,
e.g. Apollo Computer, to overcome hardware deficiencies on page faults
have NOT restricted the instructions useful on their machines. This
would lose the benefit of the two cpu scheme. Restrictions on
instructions are adopted by those who can or wish to enforce them or
depend on "scouts honor" programming. I know vendors doing this who
expect Motorola to produce correctly working chips before they bring
any products to market and do their program development with
high-level language compilers which circumvent the offending
instructions. Such development presumes that the compilers can easily
be fixed to generate more effective code when the chip handles the
fault correctly and/or when the program is to run in a non-paging
environment.
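
One way such circumvention can work (a general technique, not
necessarily what those vendors actually do) is for the generated code
to touch every page of an operand with a plain, restartable load
before issuing an instruction that cannot be restarted after a bus
error; if a fault is going to occur, it then occurs where recovery is
simple. A sketch in C, with touch_pages and PAGE_SIZE as made-up
names:

#include <stddef.h>

#define PAGE_SIZE 4096          /* assumed page size */

/* Touch every page of a buffer with an ordinary load, so that any
 * page fault happens on this simple reference rather than in the
 * middle of a non-restartable instruction. */
static void touch_pages(const char *addr, size_t len)
{
    volatile char sink;
    const char *p;

    for (p = addr; p < addr + len; p += PAGE_SIZE)
        sink = *p;                    /* fault here, where restart is easy */
    if (len > 0)
        sink = *(addr + len - 1);     /* last byte may sit on a later page */
    (void)sink;
}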

--bob morris

------------------------------

Date: 17 December 1981 01:44-EST
From: Leonard N. Foner <FONER at MIT-AI>
Subject: The Real Implementation of
Subject: the MC68000 Demand Paging Algorithm

This was a project I was working on about a year ago. The scoop:

You CAN use ANY instruction in the processor with the dual-processor
system. Essentially, what happens on a page fault is that the CPU
gets a LONG memory wait for the next byte. The other processor
handles the stuff. Of course, you can't go off and run somebody else
while you're waiting for the next page to come into memory, unlike
reasonable VM machines like VAXen, but this will work.
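
To make the mechanism concrete, here is a rough model in C of what
the second (executive) processor might do, assuming the board latches
the faulting address and holds the user CPU in wait states until the
line is released. All the hardware access routines below are
stand-ins, not anybody's actual design:

#include <stdint.h>
#include <stdbool.h>

/* Stand-ins for board-level hardware; a real design (e.g. Apollo's)
 * would use actual latches and wait-state logic. */
static bool     user_cpu_stalled(void)             { return false; }
static uint32_t latched_fault_address(void)        { return 0; }
static void     load_page_from_disk(uint32_t page) { (void)page; }
static void     map_page(uint32_t page)            { (void)page; }
static void     release_wait_line(void)            { }

#define PAGE_SHIFT 12   /* assumed 4K pages */

void executive_loop(void)
{
    uint32_t page;

    for (;;) {
        if (!user_cpu_stalled())
            continue;                     /* user CPU running normally */

        page = latched_fault_address() >> PAGE_SHIFT;
        load_page_from_disk(page);        /* nothing else runs meanwhile */
        map_page(page);
        release_wait_line();              /* stalled memory cycle completes */
    }
}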

For this reason, you probably also wanna use a stacklike cache of
pages that are bound for disk, to decrease "expensive" page faults
which have to hit the disk. (That way, you've effectively got a
larger working set, since only the page last accessed in \that/ cache
gets stuck on disk, when you've gotta bring a page in from disk on a
fault.) I'm not at all sure how much of a difference this makes when
you can't run somebody else during the fault, though... but good
references on good paging algorithms are in the VAX Hardware Handbook.
There \must/ be theoretical treatments around on how to do it
efficiently, and that will really help in this situation.
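
A minimal sketch of that stacklike cache, with the depth and all the
names made up for illustration: pages chosen for eviction go onto an
in-memory list instead of straight to disk, a fault searches the list
first, and only when the list overflows does the oldest entry
actually get written out.

#include <string.h>

#define EVICT_CACHE 8                     /* assumed cache depth */

static unsigned cached_page[EVICT_CACHE];
static int      cached_count;

/* Returns 1 if the faulting page was still cached (cheap fault),
 * 0 if it has to come from disk (expensive fault). */
int reclaim_from_cache(unsigned page)
{
    int i;

    for (i = 0; i < cached_count; i++) {
        if (cached_page[i] == page) {
            memmove(&cached_page[i], &cached_page[i + 1],
                    (size_t)(cached_count - i - 1) * sizeof cached_page[0]);
            cached_count--;
            return 1;
        }
    }
    return 0;
}

/* Called when a page is evicted; returns the page number that must
 * really be written to disk, or -1 if the cache still has room. */
long push_evicted(unsigned page)
{
    long victim = -1;

    if (cached_count == EVICT_CACHE) {
        victim = (long)cached_page[0];    /* oldest entry goes to disk */
        memmove(&cached_page[0], &cached_page[1],
                (EVICT_CACHE - 1) * sizeof cached_page[0]);
        cached_count--;
    }
    cached_page[cached_count++] = page;
    return victim;
}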

Obviously, how fancy you wanna get on paging is a tradeoff between
price and performance, pure and simple... with the law of diminishing
returns thrown in as an extra added feature.

Have fun.

<LNF>

------------------------------

End of WorkS Digest
*******************
-------
