
Multics Concepts For the Contemporary Computing World


Ken Clement

Jun 23, 2003, 5:15:02 PM
Multics was a great learning experience for me personally, and a lot
of fun to boot.

It also informed a generation or two of computer scientists, both
positively and negatively, about how to build a secure, shared, and
robust computing resource.

For those of you who designed or experienced Multics, I would like to
pose the following open-ended question(s):

Now, in the present age of the Internet - replete with WORMS, VIRUSES,
and SPAM - how would you design a computing system, employing the
lessons of Multics and encompassing hardware, operating system, and
applications, to address contemporary security and reliability issues?

This question assumes a greenfield implementation with no need to be
compatible with existing hardware or software except to be able to
implement standard interfaces and network protocols, but complete
freedom with the rest of the architecture.

Would you use segmentation as in Multics?

Would this include rings?

How would you envision the features of this architecture being used to
provide a more secure foundation to address contemporary issues?

How would the MMU function?

What would the instruction set look like?

Would the architecture be 64 bit?

Would the architecture be RISC?

Would the O.S. use a micro kernel approach or a monolithic one?

Would the architecture look different on client hosts (desktops)
versus server hosts?

What features would you implement differently from the way Multics
implemented them (education by counterexample)? Which features would
you not implement at all?

Would non-discretionary access protection have a role in this new
architecture? If so, how would it be used (perhaps differently from
the way it was in Multics)?

What language(s) would the system be coded in?

Should any effort to implement such hardware and software, were it
actually done, be an open-source one? (Architecture specification in
the case of hardware, source code for software.)

What ideas from MACH or UNIX might also be incorporated? (Down SCO!
Down!)

How would you avoid "Second System Syndrome" especially in light of
the Multics experience?

I would be interested in any links to papers or web pages that already
discuss these issues as I believe these likely exist, but I thought an
actual discussion might be at least interesting if not seminal.

Best Regards,
Ken Clement
Multician in Exile
(aren't we all!)

Stephen H. Westin

Jun 24, 2003, 11:36:53 AM
k...@clement.name (Ken Clement) writes:

> Would you use segmentation as in Multics?

Yes. The wonderful thing about this, as I understand it, is that disk
file mapping and shared memory become first-class citizens, rather
than added-on kludges. Nothing has to be built as a shared library.
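
For readers who never touched Multics: the nearest everyday Unix
analogue is mmap(), which has to be bolted on with explicit system
calls. A minimal sketch, assuming a pre-existing file "data.bin"
(name hypothetical):

/* Rough Unix analogue of "a file is just a segment": map the file
 * and treat it as ordinary memory. In Multics no setup call was
 * needed -- referencing the segment was enough. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The mapping is the "segment": loads and stores go straight to
     * the file's pages, shared with any other process mapping it. */
    char *seg = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (seg == MAP_FAILED) { perror("mmap"); return 1; }

    seg[0] = 'M';            /* an ordinary store, not a write() call */
    munmap(seg, st.st_size);
    close(fd);
    return 0;
}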

> Would this include rings?

I would guess so.

> How would you envision the features of this architecture being used to
> provide a more secure foundation to address contemporary issues?
>
> How would the MMU function?
>
> What would the instruction set look like?
>
> Would the architecture be 64 bit?

Sure. 16 bits of segment number, 48 bit offset within segment. Or
maybe more segments.

> Would the architecture be RISC?

Yeah. Whatever the hardware people can do best in. But we would write
it for portability, so a new generation of hardware wouldn't be bound
to a particular ISA.

> Would the O.S. use a micro kernel approach or a monolithic one?

I think a microkernel is fundamental to security. Wasn't this stated
in the Air Force Multics security paper?

> Would the architecture look different on client hosts (desktops)
> versus server hosts?

That's the one thing that might want rethinking. Multics was designed
to share a system between many competing users, but a desktop machine
wants to share itself between different (possibly concurrent) tasks of
a single user. Unix seems to have made this transition painlessly, but
I suspect that some of the sophistication of Multics in processor
scheduling and memory allocation would be wasted (at best) or might
get in the way (at worst). The assumption in the days of Multics was
that only a small fraction (1/5?) of any given program would be
resident in main memory at a given time, and efficiency came from
surrendering the processor(s) to other tasks while waiting for page
reads. These days we generally just load up a single program and let
it crunch; multitasking is mainly to maintain system stuff (display
update, print spooling, network communication) and to let the user
occupy him/herself while waiting for results.

> What features would you implement differently from the way Multics
> implemented them (education by counterexample)? Which features would
> you not implement at all?
>
> Would non-discretionary access protection have a role in this new
> architecture? If so, how would it be used (perhaps differently from
> the way it was in Multics)?

Hmm. Sounds like a means of intellectual property protection.

> What language(s) would the system be coded in?

Doesn't matter. Probably a C derivative; perhaps Java.

> Should any effort to implement such hardware and software be an
> open-source one were it actually done? (Architecture Specification in
> the case of hardware, Source Code for Software)

It's hard to build open-source processor chips, and I suspect a good
system would want low-level hardware support. Open source could be a
means of peer review: if everyone sees the source, Trojan horses and
trap doors are less of a threat, and everyone can verify the
correctness and security of the code. But I think it needs a leader to
avoid feature bloat, inconsistency, and paralysis.

<snip>

> How would you avoid "Second System Syndrome" especially in light of
> the Multics experience?

Well, Multics was the second system for CTSS. I think this showed; the
system was long behind schedule becoming operational, and performance
was long a problem. Since we have much more experience both in
interactive systems and in massive software projects, there's a better
chance of success.

<snip>

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.

Edward Rice

Jun 24, 2003, 12:16:09 PM
In article <u4r2fq...@graphics.cornell.edu>,

westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote:

> > Would the O.S. use a micro kernel approach or a monolithic one?
>
> I think a microkernel is fundamental to security. Wasn't this stated
> in the Air Force Multics security paper?

Possibly stated as a requirement, but if so, "requirement" related to the
design, programming, and verification tools of that period. I happen to
think it would still be highly desirable, but I'm not sure whether it would
still be an absolute requirement. It would require restructuring of what
we gray-hairs think of as Multics, but I never saw it (from my remove --
others may well disagree violently) as a real impediment, just a minor
irritation.

> > Would the architecture look different on client hosts (desktops)
> > versus server hosts?
>
> That's the one thing that might want rethinking. Multics was designed
> to share a system between many competing users, but a desktop machine
> wants to share itself between different (possibly concurrent) tasks of
> a single user. Unix seems to have made this transition painlessly, but
> I suspect that some of the sophistication of Multics in processor
> scheduling and memory allocation would be wasted (at best) or might
> get in the way (at worst). The assumption in the days of Multics was
> that only a small fraction (1/5?) of any given program would be
> resident in main memory at a given time, and efficiency came from
> surrendering the processor(s) to other tasks while waiting for page
> reads. These days we generally just load up a single program and let
> it crunch; multitasking is mainly to maintain system stuff (display
> update, print spooling, network communication) and to let the user
> occupy him/herself while waiting for results.

This is a really interesting issue. If we were to split server and
client functionality, where might that split occur? Or would we split
on the basis of resource demands -- if you edit a thousand-line file,
you do it on the client and the server need not be involved, but if
you edit one with a million (billion, whatever) lines, is there a way
to semi-transparently ask the server to do it? Is the server side
really just daemons and background processing, with the client side
providing most of what we know from ring_4?

I don't think processor or memory space is necessarily our bottleneck, but
flow of file access and control might be. In a large enough network, at
least. We are going to run the Internet with some of these new beasts,
right? Would the combined system recognize that "pl1 syslib>**.pl1" was
bogging down on the client and move the workload to the server-level
hardware, or would we have to specify "pl1 syslib>**.pl1 -server" for that
to happen?

> > Would non-discretionary access protection have a role in this new
> > architecture? If so, how would it be used (perhaps differently from
> > the way it was in Multics)?
>
> Hmm. Sounds like a means of intellectual property protection.

Something that handled that issue rationally and flexibly would be pretty
nice. Unfortunately, copying a segment to which you have "r" access and
then assigning "re" access to yourself on the copy gets past some things.
Generation-control by the system, such that an entity would be copiable but
the copy would not be, could be enforced by non-discretionary controls, but
when you're done the thing still sits on a general-purpose computing frame
and someone can fiddle it some more.

> > How would you avoid "Second System Syndrome" especially in light of
> > the Multics experience?
>
> Well, Multics was the second system for CTSS. I think this showed; the
> system was long behind schedule becoming operational, and performance
> was long a problem. Since we have much more experience both in
> interactive systems and in massive software projects, there's a better
> chance of success.

From your lips to God's ear.

ehr


Shmuel (Seymour J.) Metz

Jun 24, 2003, 1:25:56 PM
In <355431eb.03062...@posting.google.com>, on 06/23/2003

at 02:15 PM, k...@clement.name (Ken Clement) said:

>For those of you who designed or experienced Multics, I would like to
>pose the following open-ended question(s):

What about those that read about it with lust in their hearts?

>This question assumes a greenfield implementation with no need to be
>compatible with existing hardware or software except to be able to
>implement standard interfaces and network protocols, but complete
>freedom with the rest of the architecture.

I'd either go with a capability based system or with a paged/segmented
system that included a sizable number of rings and large addresses; I'd
probably want 128 bits, with 64 for segment # and the rest split
between page and offset, but almost certainly more than 64.

>Would you use segmentation as in Multics?

Unless I went with capabilities.

>Would this include rings?

At least 64; probably 256, unless I used a more general mechanism.

>How would you envision the features of this architecture being used
>to provide a more secure foundation to address contemporary issues?

I'd want to find an efficient way to allow safe cross-ring calls in
both directions. That probably would require some tinkering with the
stack mechanism. I'd want an architectural requirement that code can
only be executed from r/o segments, and I'd use an implementation
language that included automatic array/string bounds checking.
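
The "code only from r/o segments" rule has a rough modern counterpart
in W^X policies. A minimal Unix sketch of the discipline, with the
actual machine code omitted (purely illustrative):

/* A page is writable while being filled, then sealed read+execute
 * before anything may run from it. A store to the sealed page faults,
 * which is the architectural guarantee being asked for above. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE);

    /* Stage 1: writable, NOT executable, while being filled. */
    unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    memset(page, 0, len);   /* machine code would be copied in here */

    /* Stage 2: executable, NOT writable. */
    if (mprotect(page, len, PROT_READ | PROT_EXEC) < 0) {
        perror("mprotect"); return 1;
    }
    puts("page sealed r-x");
    munmap(page, len);
    return 0;
}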


>How would the MMU function?

As with Multics, but with an added "copy on write" bit in page
descriptors. Also, if I went with a capability-based architecture then
it would include hardware bounds checking.

>What would the instruction set look like?

My prejudice is in favor of byte handling for sizes at least 1-64,
crossing arbitrary boundaries. I also favor truncated-address
architectures. Other than that, there are so many possibilities that a
discussion would get way off topic.

>Would the architecture be 64 bit?

At least.

>Would the O.S. use a micro kernel approach or a monolithic one?

Probably neither.

>Would the architecture look different on client hosts (desktops)
>versus server hosts?

I hope not.

>Would non-discretionary access protection have a role in this new
>architecture?

Absolutely.

>What language(s) would the system be coded in?

My preference would be PL/I. My suspicion is that it would be the fad
du jour.

>Should any effort to implement such hardware and software be an
>open-source one were it actually done?

Yes.

A couple of facilities in Multics I'd like to comment on in this
context:

Dynamic linking should be fully supported; none of these DLL
half measures.

The shell should support both active functions and pipes.

There should be a stream I/O system supporting the file
system, pipes and external devices.

While the central idea of memory mapped files should be
preserved, there should be general support for extended attributes
and auxiliary segments.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT

Any unsolicited bulk E-mail will be subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail.

Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply
to spam...@library.lspace.org


Tom Van Vleck

Jun 24, 2003, 2:46:00 PM
I don't have time now to do this topic justice, but here are a
few remarks, from the point of view of someone who left the
OS design team in 1981.

1. Multics segment descriptors are capabilities. They are
managed by the OS and kept in OS space, and interpreted by the
hardware. Users call the OS to create them, destroy them, etc.
The Multics rule for passing them around was that you can't save
them or pass them to another process. But they are capabilities.

2. Multics segments were too small. The max size pinched only
occasionally in the 1970s but would be intolerable now.

3. Multics processes had too few segments. I forget the limit;
it wasn't that big. Again, it was rarely a problem in the 70s
but should be addressed in a modern system.

4. The supervisor team discussed, at various times, how to do
things ideally. One idea was a 72-bit segment number, such that
every segment number would be permanent and unique. Some kind of
associative memory would have to determine whether such a number
was currently valid. (A toy software sketch of such a lookup
appears after these remarks.)

5. A major omission from Multics was the network. There should
be some kind of transparent single system image. The tricky part
would be integrating this with the other mechanisms in the system
cleanly; the question is, how does one interpret ACLs on remote
objects: e.g. is "Smith.*.*" there the same as "Smith.*.*" here?

6. Rings are an interesting subject. The problem is that
everything in ring 1 can interfere with everything else in ring
1. Probably a less restrictive lattice is needed, and an
unlimited number of ringlike domains created on demand. But
Multics rings had one great virtue, that of efficiency. Once the
per-ring structures were amortized, cross-ring calls were as
efficient as intra-ring: no shrinking of argument descriptors, no
slow special cross-ring-call opcode.

7. Discussion on the "kernel" point has missed a key aspect.
Microkernel systems such as Mach work by message passing. Multics
had a notion of a "kernel" and there was a design project to
separate ring 0 into kernel and non-kernel, and multiple projects
to move stuff out of ring 0, mostly never shipped. But these two
are not the same thing: there was never any proposal to introduce
message-passing calls into the Multics architecture. So this is
a big choice, to be made at the very beginning. Message passing
architectures like Mach's are great for structure, but there's a
heavy performance penalty you pay up front, in argument
marshaling and so on. I worked on Tandem systems, and because
they were fundamentally message passing, they were able to expand
to multiprocessors and clusters with ease.

8. I would be tempted to put in some kind of mandatory access
control, but the human interface to those features would have to
be thought out deeply. All of access control needs vast
innovation in user interface: one study showed that 90% of UNIX
systems had crucial access control settings wrong.
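
As a purely illustrative software model of the associative memory in
point 4: a hash table from permanent 72-bit UID to the live segment,
with all sizes and names invented (real hardware would use an
associative array, not a software hash):

#include <stdint.h>
#include <stdio.h>

struct uid72 { uint64_t lo; uint8_t hi; };   /* 64 + 8 = 72 bits */

struct entry { struct uid72 uid; int valid; int segno; };

#define SLOTS 4096
static struct entry table[SLOTS];

static unsigned hash72(struct uid72 u)
{
    return (unsigned)((u.lo ^ (u.lo >> 32) ^ u.hi) % SLOTS);
}

/* Returns the current segment number for the UID, or -1 if the UID
 * is not currently valid (the OS would then activate the segment or
 * reject the reference). */
static int lookup(struct uid72 u)
{
    unsigned i = hash72(u), probes = 0;
    while (table[i].valid && probes++ < SLOTS) {
        if (table[i].uid.lo == u.lo && table[i].uid.hi == u.hi)
            return table[i].segno;
        i = (i + 1) % SLOTS;                  /* linear probing */
    }
    return -1;
}

int main(void)
{
    struct uid72 u = { 0x0123456789abcdefULL, 0x2a };
    table[hash72(u)] = (struct entry){ u, 1, 400 };
    printf("segno = %d\n", lookup(u));        /* prints 400 */
    return 0;
}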


Peter Flass

Jun 24, 2003, 3:40:55 PM
"Shmuel (Seymour J.) Metz" wrote:
>
> Dynamic linking should be fully supported; none of these DLL
> half measures.
>

DLLs (DSOs, etc.) are a kludge to make up for the lack of sharable
segments. Given the architecture that has to support them, they're
fairly well done, but certainly Multics' generalized mechanism is much
better.

Christopher Browne

Jun 24, 2003, 4:39:42 PM
In the last exciting episode, k...@clement.name (Ken Clement) wrote:
> Would you use segmentation as in Multics?

It wouldn't be terribly Multics-like, otherwise, no?

> Would this include rings?

Probably so.

> How would the MMU function?
>
> What would the instruction set look like?

One of the lessons that Unix taught was that the instruction set
_isn't_ everything. Wiser to keep the system a bit more portable so
that when a new architecture comes along, or an old one disappears,
the porting project isn't preposterously difficult.

> Would the architecture be 64 bit?
>
> Would the architecture be RISC?

Having 64 bits of memory space to play with would be a good thing.

Basing the system on one RISC architecture would not. The ups and
downs of MIPS, Alpha, SPARC, and PPC demonstrate that there is
considerable danger to committing to just one.

> Would the O.S. use a micro kernel approach or a monolithic one?

Look back to the security recommendations. The Air Force only
considered Multics to be secure within "benign" environments, and
recommended having a "security kernel" around it. That sure seems to
point to "microkernel," not "monolith"...

> Would the architecture look different on client hosts (desktops)
> versus server hosts?

Presumably there would be more sophisticated configurations on server
hosts, but when the smallest RAM stick you can get these days is
256MB, and they're practically giving those away, there seems little
merit to scaling it down...

> What features would you implement differently from the way Multics
> implemented them (education by counterexample)? Which features
> would you not implement at all?

I'm sure many reactions to NT and Unix could be useful...

> What language(s) would the system be coded in?

There's an excellent question.

We well know that C is pretty fragile for the purpose. PL/I compilers
are getting pretty thin on the ground, and the folks who do PL/I tend
to be more "business oriented" than "kernel oriented."

The best thought I have is Ada, but it's not necessarily likely to be
palatable...

> Should any effort to implement such hardware and software be an
> open-source one were it actually done? (Architecture Specification
> in the case of hardware, Source Code for Software)

I doubt any alternative would be realistic, at this point in time.
Microsoft has done such a good job of 'scorching the earth' in
removing other commercial OS competition, and there are so many "free
software" implementations of Unix that this pretty effectively
squeezes out the option of a "commercial" implementation.

> What ideas from MACH or UNIX might also be incorporated? (Down SCO!
> Down!)

Many, I am sure. More relevant, I'd think, would be to borrow some of
the emulation ideas from Hurd, which intend(s|ed) to provide multiple
personalities.

It is probably necessary to start with something that has a
"Unix-like" personality in order to draw on the relevant tool sets.

It might even be the "right idea" to start by hosting something atop
one of the free Unices, and migrate functionality into the "New-ics"
environment, with a view to eventually having it self-host.
--
let name="cbbrowne" and tld="acm.org" in name ^ "@" ^ tld;;
http://www.ntlug.org/~cbbrowne/multics.html
The *Worst* Things to Say to a Police Officer: I was trying to keep up
with traffic. Yes, I know there is no other car around - that's how
far ahead of me they are.

John W Gintell

Jun 24, 2003, 11:28:22 PM
Christopher Browne wrote:
> In the last exciting episode, k...@clement.name (Ken Clement) wrote:
>
>>Would you use segmentation as in Multics?
>
>
> It wouldn't be terribly Multics-like, otherwise, no?
>
>
>>Would this include rings?
>
>
> Probably so.
>

To me the essence of Multics is segmentation. Programs refer to data
via direct addressing, down to the bit, in the processor, with the
hardware checking access on every reference. There is no distinction
between "memory" and "files".

The biggest technical limitation was segment size (1 megabyte,
addressed to the word with an 18-bit address, with byte or bit
addressing added where necessary). In reality this limit wasn't too
big a burden: most applications stored collections of text data, so
there were few cases of multi-segment files or databases, and the
limit didn't require many kludges or work-arounds. But with the sound
and graphics and movies, etc., that people store and process these
days, the objects are much bigger.

What would be a reasonable segment size limit in 2025? (That is a
shorter service life than the 1965 hardware-enforced limit got, since
it was still running 30 years later.)

The second attribute of these segments is that they could be shared by
multiple processes acting on behalf of multiple users, and the cost of
making sharing work (locking, access control) was low because of
hardware and the fact that all these processes ran in the same system
with common memory and multiplexed processors.

How should something like this scale across a network, so that the same
kind of sharing would work efficiently across multiple systems? Or
would it be "OK" not to meet that requirement in a future system?

The biggest limitation for the future was that Multics ran on hardware
designed to run it and nothing else. I'm not talking about instruction
sets, which can be dealt with by recompilation, but about the
addressing/access-control architecture.

It would be wisest to look at existing hardware architectures to see
how well or badly they deal with access control and addressing from
this point of view.

Capitan Mutanda

Jun 25, 2003, 4:46:16 AM
westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote in message news:<u4r2fq...@graphics.cornell.edu>...

> k...@clement.name (Ken Clement) writes:
>
> > Would you use segmentation as in Multics?
>
> Yes. The wonderful thing about this, as I understand it, is that disk
> file mapping and shared memory become first-class citizens, rather
> than added-on kludges. Nothing has to be built as a shared library.

Could you please elaborate a bit on why you would not need shared
libs (Solaris style, or M$'s)?

/CM

Shmuel (Seymour J.) Metz

Jun 25, 2003, 11:07:03 AM
In <u4r2fq...@graphics.cornell.edu>, on 06/24/2003
at 11:36 AM, westin*nos...@graphics.cornell.edu (Stephen H. Westin)
said:

>Sure. 16 bits of segment number,

Not nearly enough, IMHO.

>The assumption in the days of Multics was
>that only a small fraction (1/5?) of any given program would be
>resident in main memory at a given time, and efficiency came from
>surrendering the processor(s) to other tasks while waiting for page
>reads. These days we generally just load up a single program and let
>it crunch;

We? I normally have multiple applications open concurrently. The idea
of paging in code when you need it applies very much to the desktop.
Further, several of the applications that I use heavily have multiple
threads.

Do you use an office suite? How much of the code do you use in any
given hour? I'd be very surprised if it was even as large a fraction
as the 1/5 that you estimate for Multics.

>and to let the user
>occupy him/herself while waiting for results.

The worst part of the web infestation is the perverse idea that the
user's time has no value and that it is legitimate to expect him to
wait for results. IMHO the user's time is more valuable than the
computer's time, especially for a computer as inexpensive as a PC.
Multitasking is one tool for freeing the user's time for more
productive use.

Stephen H. Westin

Jun 25, 2003, 1:26:18 PM
capitan...@hotmail.com (Capitan Mutanda) writes:

As others have pointed out, a dll/DSO is a kludge. You must
specifically build a sharable object and link to it in a special
way. In Multics, the linker ("ld" in Unix) isn't needed: at run time,
the system loads one segment and runs it. When something from another
segment is needed, that segment is automatically loaded on the fly and
symbols resolved. So the system doesn't know, or care, whether that is
another compiled segment in the same directory or a system library.

This deals with the two purposes of DLLs: actual sharing (to avoid
wasting memory), and dynamic linking (to provide upgraded
infrastructure without relinking every program). In addition, it dealt
with shared data segments, which tend to be thought of as a whole
different animal in Unix.

Oh, and segmentation means that you don't have to have special
position-independent code or wire down a shared library to certain
virtual addresses. Even shared data can have different segment numbers
in different processes, and pointers within that segment still
work. Including a shared data segment at different locations in
different processes in Unix requires some sort of special addressing,
if there are pointers within the data.
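
That "special addressing" usually means storing base-relative offsets
instead of raw pointers, and rebuilding pointers per process. A small
sketch of the idiom (names and layout invented):

/* A shared region that may map at different addresses in different
 * processes stores offsets from the region base, never raw pointers. */
#include <stddef.h>
#include <stdio.h>

struct node {
    int    value;
    size_t next_off;   /* offset of next node from region base; 0 = end */
};

static struct node *node_at(char *base, size_t off)
{
    return off ? (struct node *)(base + off) : NULL;
}

int main(void)
{
    static char region[4096];          /* stand-in for an mmap'd region */
    struct node *a = (struct node *)(region + 64);
    struct node *b = (struct node *)(region + 128);
    a->value = 1; a->next_off = 128;   /* a "pointer" to b, as an offset */
    b->value = 2; b->next_off = 0;

    for (struct node *n = node_at(region, 64); n != NULL;
         n = node_at(region, n->next_off))
        printf("%d\n", n->value);
    return 0;
}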

Stephen H. Westin

Jun 25, 2003, 1:33:55 PM
"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> writes:

> In <u4r2fq...@graphics.cornell.edu>, on 06/24/2003
> at 11:36 AM, westin*nos...@graphics.cornell.edu (Stephen H. Westin)
> said:
>
> >Sure. 16 bits of segment number,
>
> Not nearly enough, IMHO.
>
> >The assumption in the days of Multics was
> >that only a small fraction (1/5?) of any given program would be
> >resident in main memory at a given time, and efficiency came from
> >surrendering the processor(s) to other tasks while waiting for page
> >reads. These days we generally just load up a single program and let
> >it crunch;
>
> We? I normally have multiple applications open concurrently. The idea
> of paging in code when you need it applies very much to the desk top.
> Further, several of the applications that I use heavily have multiple
> threads.
>
> Do you use an office suite? How much of the code do you use in any
> given hour? I'd be very surprised if it was even as large a fraction
> as the 1/5 that you estimate for Multics.

But that's not the same thing as having 100 users, 20 of whom may be
vying for the CPU and memory together. The Multics goal was to
optimize response time for a number of competing users. Sure, when I
switch to the Web browser, that has to be paged in. But that's a
noticeable delay as substantially the whole thing gets paged
in. NetBeans is far worse; applications like Maya (an
animation/rendering system) are somewhere in between.

The current assumption is that the number of page reads will be
relatively small once things are really going; otherwise things slow
down. In MATLAB, for example, it's not really practical to deal with
arrays larger than the physical memory of the machine. In the
timesharing days, one of the big selling points for VM was the ability
to deal with data larger than physical memory.

> >and to let the user
> >occupy him/herself while waiting for results.
>
> The worst part of the web infestation is the perverse idea that the
> user's time has no value and that it is legitimate to expect him to
> wait for results. IMHO the user's time is more valuable than the
> computer's time, especially for a computer as inexpensive as a PC.
> Multitasking is one tool for freeing the user's time for more
> productive use.

Yeah, but how often do you achieve full CPU utilization with real
sharing between multiple applications? In my experience, Windows 2000
is pretty awful at, say, running a long computation in the background
and doing anything interactive (especially with significant memory
demands) at the same time.

Peter Flass

Jun 25, 2003, 6:13:49 PM
"Stephen H. Westin" wrote:
> Yeah, but how often do you achieve full CPU utilization with real
> sharing between multiple applications? In my experience, Windows 2000
> is pretty awful at, say, running a long computation in the background
> and doing anything interactive (especially with significant memory
> demands) at the same time.
>

Any version of winblows is bad. OS/2 is somewhat better, but not
great. Add one of the brain-dead "win-printers" that use CPU time to
print scan-line by scan-line and you've got a Pentium running like a
286.

Partly this is just a function of the scheduling and paging algorithms.
You can only do so much with the resources you have, but what do you
want to optimize for -- foreground responsiveness, overall throughput, or
something else? This is one area where you might want to differentiate
client and server. Keep the rest of the system the same and vary the
scheduler somewhat.

One thing I haven't seen mentioned is threads. I would think a modern
Multics would need threads.

Stephen H. Westin

Jun 25, 2003, 6:51:13 PM
Peter Flass <peter...@yahoo.com> writes:

> "Stephen H. Westin" wrote:
> > Yeah, but how often do you achieve full CPU utilization with real
> > sharing between multiple applications? In my experience, Windows 2000
> > is pretty awful at, say, running a long computation in the background
> > and doing anything interactive (especially with significant memory
> > demands) at the same time.
> >
>
> Any version of winblows is bad. OS/2 is somewhat better, but not
> great. Add one of the brain-dead "win-printers" that use CPU time to
> print scan-line by scan-line and you've got a Pentium running like a
> 286.

Sorry for bringing Windows into this, but it is the dominant desktop
OS, for better or for worse. I think most of what I say applies to Mac
and Linux systems, as well.

In the formative days of Multics, memory and processors were scarce,
so lots of effort went into allocating them between many competing
users. Sharing resources between different tasks of a single user is
not the same problem, and it's even more different because we can now
basically afford the memory to have a whole application resident. And
I think applications are written that way, without attention to memory
access patterns.

> Partly this is just a function of the scheduling and paging algorithms.
> You can only do so much with the resources you have, but what do you
> want to optimize for -- foreground responsiveness, overall thruput, or
> something else? This is one area where you might want to differentiate
> client and server. Keep the rest of the system the same and vary the
> scheduler somewhat.

Yes, but this needs to be redesigned from what Multics did, as both
the requirements and resources have changed. That was the point that I
apparently failed to make.

> One thing I haven't seen mentioned is threads. I would think a modern
> Multics would need threads.

--

Capitan Mutanda

Jun 26, 2003, 8:35:17 AM
westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote in message news:<uvfuuo...@graphics.cornell.edu>...

> As others have pointed out, a dll/DSO is a kludge. You must
> specifically build a sharable object and link to it in a special
> way. In Multics, the linker ("ld" in Unix) isn't needed: at run time,
> the system loads one segment and runs it. When something from another
> segment is needed, that segment is automatically loaded on the fly and
> symbols resolved. So the system doesn't know, or care, whether that is
> another compiled segment in the same directory or a system library.
>
> This deals with the two purposes of DLLs: actual sharing (to avoid
> wasting memory), and dynamic linking (to provide upgraded
> infrastructure without relinking every program). In addition, it dealt
> with shared data segments, which tend to be thought of as a whole
> different animal in Unix.

Thanks for your reply! However I still don't understand how I would use segments.

Let me detail my question... In my job I have several times built applications
that could be extended without the need of recompilation or giving source code
to the users. All I would do is create a config file that would specify
for each extension an entry point. The user could write his own handlers
and from my code I would read the config file and use dlopen and dlsym
to load the extension and locate the entry point. More or less like changing
the implementation class in java.
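
(In miniature, that pattern looks something like the following;
library, symbol, and handler names are hypothetical, and on most
systems you link the host program with -ldl:)

/* Read an entry point from configuration, dlopen the shared object,
 * dlsym the symbol, call it. */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*handler_fn)(const char *event);

int main(void)
{
    /* In the real application these strings come from the config file. */
    const char *lib = "./libhandler.so";
    const char *sym = "on_event";

    void *h = dlopen(lib, RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    handler_fn fn = (handler_fn)dlsym(h, sym);
    if (!fn) { fprintf(stderr, "%s\n", dlerror()); dlclose(h); return 1; }

    int rc = fn("startup");
    printf("handler returned %d\n", rc);
    dlclose(h);
    return 0;
}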

How would I do this with segments in Multics? Or maybe I misunderstood
the initial posting.

TIA

/CM

Stephen H. Westin

Jun 26, 2003, 12:24:02 PM
capitan...@hotmail.com (Capitan Mutanda) writes:

> westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote in message news:<uvfuuo...@graphics.cornell.edu>...
> > As others have pointed out, a dll/DSO is a kludge. You must
> > specifically build a sharable object and link to it in a special
> > way. In Multics, the linker ("ld" in Unix) isn't needed: at run time,
> > the system loads one segment and runs it. When something from another
> > segment is needed, that segment is automatically loaded on the fly and
> > symbols resolved. So the system doesn't know, or care, whether that is
> > another compiled segment in the same directory or a system library.
> >
> > This deals with the two purposes of DLLs: actual sharing (to avoid
> > wasting memory), and dynamic linking (to provide upgraded
> > infrastructure without relinking every program). In addition, it dealt
> > with shared data segments, which tend to be thought of as a whole
> > different animal in Unix.
>
> Thanks for your reply! However I still don't understand how I would
> use segments.

Segments are just there in Multics. They are the native language. It's
kinda like asking how ELF or COFF files are used in Unix.

> Let me detail my question... In my job I have several times built
> applications that could be extended without the need of
> recompilation or giving source code to the users. All I would do is
> create a config file that would specify for each extension an entry
> point. The user could write his own handlers and from my code I
> would read the config file and use dlopen and dlsym to load the
> extension and locate the entry point. More or less like changing the
> implementation class in java.
>
> How would I do this with segments in Multics? Or maybe I misunderstood
> the initial posting.

Well, you would write your extension. You would compile it into an
executable segment (just as any other source file is compiled) and put
it in some directory, either one of your own or a system directory,
and give read/execute permissions to everyone. The user would make
sure that the correct directory is included in his/her load path. Then
when the user invokes a segment that refers to any of the entry points
in your segment, and execution reaches the actual call to that entry
point, the system follows the search path until it finds a segment
containing that entry point. That segment is bound into the user's
process, the link is resolved, and things proceed normally. No
configuration files, no dlopen or dlsym. It's just a routine dynamic
link.

Shmuel (Seymour J.) Metz

Jun 26, 2003, 12:05:03 PM
In <bdacud$qm3td$1...@ID-125932.news.dfncis.de>, on 06/24/2003

at 08:39 PM, Christopher Browne <cbbr...@acm.org> said:

>The best thought I have is Ada, but it's not necessarily likely to be
>palatable...

Certainly more palatable than a C-based language. I'd prefer PL/I, but
Ada isn't bad.

Shmuel (Seymour J.) Metz

Jun 26, 2003, 12:02:00 PM
In <20030624144600...@multicians.org>, on 06/24/2003

at 02:46 PM, Tom Van Vleck <th...@multicians.org> said:

>1. Multics segment descriptors are capabilities.

Not as the term is used in the literature. Search for "capability
based architecture".

>5. A major omission from Multics was the network. There should be
>some kind of transparent single system image. The tricky part would
>be integrating this with the other mechanisms in the system cleanly;
>the question is, how does one interpret ACLs on remote objects: e.g.
>is "Smith.*.*" there the same as "Smith.*.*" here?

An ACL should be associated with an object, not with its name. The
only time that there would be an issue would be if you wanted to
associate an ACL with a group of objects, in which case there would be
a case for using the names. But even then, wouldn't you want a single
namespace for the entire network?

Charlie Spitzer

Jun 26, 2003, 2:05:33 PM

"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote in
message news:3efb18f8$15$fuzhry+tra$mr2...@news.patriot.net...

> In <20030624144600...@multicians.org>, on 06/24/2003
> at 02:46 PM, Tom Van Vleck <th...@multicians.org> said:
>
> >1. Multics segment descriptors are capabilities.
>
> Not as the term is used in the literature. Search for "capability
> based architecture".
>
> >5. A major omission from Multics was the network. There should be
> >some kind of transparent single system image. The tricky part would
> >be integrating this with the other mechanisms in the system cleanly;
> >the question is, how does one interpret ACLs on remote objects: e.g.
> >is "Smith.*.*" there the same as "Smith.*.*" here?
>
> An ACL should be associated with an object, not with its name. The
> only time that there would be an issue would be if you wanted to
> associate an ACL with a group of objects, in which case there would be
> a case for using the names. But even then, wouldn't you want a single
> namespace for the entire network?

Stratus has ACLs on objects, and has implemented this by having local
and remote modules. A system is made up of 1 or more local modules,
and an object has the format %system#disk>object. Modules and systems
are linked with networks. A user is defined on all modules in a single
system, thus it doesn't matter which module an object actually resides
on; a user on that system can reach it and use it. You can reach
across the network to an object on another system, but before doing
so, must validate that you have access to objects on other systems by
giving a password (matched against the registration info for your
userid on the other system). Once validated, one can touch anything on
the other module until the process logs out. A sysadmin can also
define trusted remote systems such that validation isn't necessary.

>
> --
> Shmuel (Seymour J.) Metz, SysProg and JOAT

regards,
charlie
stratus cac


Mel Wilson

Jun 26, 2003, 3:47:37 PM
In article <3efb18f8$15$fuzhry+tra$mr2...@news.patriot.net>,

"Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> wrote:
>In <20030624144600...@multicians.org>, on 06/24/2003
> at 02:46 PM, Tom Van Vleck <th...@multicians.org> said:
>> [ ... ] , how does one interpret ACLs on remote objects: e.g.

>>is "Smith.*.*" there the same as "Smith.*.*" here?
>
> [ ... ] But even then, wouldn't you want a single

>namespace for the entire network?

At the present time, Uniform Resource Identifiers seem to
give us that.

Regards. Mel.

John W Gintell

Jun 26, 2003, 4:25:17 PM
Ken Clement wrote:
> Multics was a great learning experience for me personally, and a lot
> of fun to boot.
>

I point out that the IA-32 architecture (Pentium):

-has segments that can be as large as 4GB
-supports 8192 segments
-has 6 segment address registers to access these segments
-supports "rwe" and "a" access control
-has 4 rings with gates
-the segments are mapped into a linear address space that is paged

So this processor, which you can buy for a couple of hundred dollars,
would support a modern implementation of Multics; the chief
architectural difference would be abandoning paged segments in favor
of segments mapped into a paged linear address space.
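
For concreteness, the 8-byte descriptor that carries each segment's
base, limit, access type, and ring is laid out roughly as below (per
the Intel manuals; C bitfield ordering is compiler-dependent, so treat
this as a picture rather than portable systems code):

#include <stdint.h>

struct ia32_seg_desc {
    uint32_t limit_0_15  : 16;  /* segment limit, low bits            */
    uint32_t base_0_15   : 16;  /* segment base, low bits             */
    uint32_t base_16_23  : 8;
    uint32_t type        : 4;   /* carries the r/w/e and "a" bits     */
    uint32_t s           : 1;   /* code/data vs. system descriptor    */
    uint32_t dpl         : 2;   /* ring number, 0..3                  */
    uint32_t present     : 1;
    uint32_t limit_16_19 : 4;
    uint32_t avl         : 1;
    uint32_t reserved    : 1;
    uint32_t db          : 1;   /* 16- vs 32-bit default operand size */
    uint32_t granularity : 1;   /* limit counted in bytes or 4K pages */
    uint32_t base_24_31  : 8;
};
/* granularity=1 with the 20-bit limit is how a segment reaches 4GB,
 * and dpl plus gate descriptors give the 4 rings mentioned above. */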

>
> Would you use segmentation as in Multics?
>
> Would this include rings?
>

Stephen H. Westin

Jun 26, 2003, 6:06:45 PM
John W Gintell <gin...@shore.net> writes:

> Ken Clement wrote:
> > Multics was a great learning experience for me personally, and a lot
> > of fun to boot.
> >
>
> I point out that the IA-32 architecture (Pentium):
>
> -has segments that can be as large as 4GB
> -supports 8192 segments
> -has 6 segment address registers to access these segments

What does that mean? Can a program access only 6 segments without
swapping segment registers?

> -supports "rwe" and "a" access control

I don't understand "a", but then I'm not a Multician. And I'm assuming
that "rwe" are all independent bits?

> -has 4 rings with gates
> -the segments are mapped into a linear address space that is paged

Again, I don't quite understand. Must the segments be contiguous? If
not, does the space between segments cost anything?

Many thanks for explaining to someone who lives far from the hardware.

> So this processor that you can buy for a couple of hundred dollars
> would support a modern implementation of Multics whose chief
> architectural difference is abandonment of paged segments and
> replacement with paged address space.

--

Peter Flass

Jun 26, 2003, 7:23:22 PM
John W Gintell wrote:
> I point out that the IA-32 architecture (Pentium):
>
> -has segments that can be as large as 4GB
> -supports 8192 segments
> -has 6 segment address registers to access these segments
> -supports "rwe" and "a" access control
> -has 4 rings with gates
> -the segments are mapped into a linear address space that is paged
>

The "rings" are fairly useless, except for ring 0 and 3. The details
escape me, but I believe the problem is something like rings 1 and 2
bypass some checking on write access. Also, there are no ring brackets
on segments. So ring 1 has access to everything in 2 and 3.

I believe this could be gotten around by defining separate LDTs for each
ring. Cross-ring checks could be made on reference and, if allowed, the
segment descriptor stored in the LDT for the referencing ring. The
problem is that this scheme doesn't allow ring brackets to be changed on
the fly. The advantage is that this would allow a reasonable number of
rings -- say eight.

Peter Flass

Jun 26, 2003, 7:34:22 PM
"Stephen H. Westin" wrote:
>
> What does that mean? Can a program access only 6 segments without
> swapping segment registers?

This is true, but my understanding of Multics is that it worked this
way. The segment and base registers would be paired: CS:EIP for the
current code segment, DS:EDX for the current part of the linkage
segment, SS:EBP for the stack segment, leaving ES, FS, and GS free to
access other segments. How much code (how many basic blocks)
references more than six segments?


>
> > -the segments are mapped into a linear address space that is paged
>
> Again, I don't quite understand. Must the segments be contiguous? If
> not, does the space between segments cost anything?
>

You'd have to think of the 4GB linear address space (or larger, with the
possibility of larger segments) as if it were the "physical" memory of
the computer. Segments would be assigned an address in LAS and not
usually moved. They could be swapped out entirely, or relocated if they
needed to be extended. The memory manager is the only place the linear
addresses would be significant, since all references would be via the
segment tables, and each process could possibly use a different segment
id. This limits the aggregate size of all segments to 4GB, but compared
to the 645 that's a pretty large memory.

Anne & Lynn Wheeler

Jun 26, 2003, 10:53:33 PM
Peter Flass <peter...@yahoo.com> writes:
> This is true, but my understanding of Multics is that it worked this
> way. The segment and base registers would be paired: CS:EIP current
> code segment, DS:EDX curent part of linkage segment, SS:EBP stack
> segment, leaving ES, FS, and GS free to access other segments. How much
> code (basic blocks) reference six segments?

the issue in ROMP (pc/rt) and RIOS (original rs/6000) was that the
original 801 architecture provided for 16 segment registers
.... allowing up to a maximum of 16 different virtual objects to be
simultaneously mapped into the address space.

The original design point for 801 and ROMP was that a proprietary
operating system had all security and privilege checking at compile
and load time .... so there was absolutely no privilege checking
needed at runtime. Inline application code could as easily swap
segment register values as it could swap general register values
(without having to resort to any kind of kernel call where privileges
were enforced).

801 used inverted tables, and the total number of different possible
addressable segments was 12 bits, or 4096 (w/o having to implement some
invalidation process). This was expanded to 24 bits, or 16 million
possible addressable segments .... although they had to be mapped into
one of the 16 possible segment registers.

Porting UNIX to that environment created a problem since

1) addressing changes required privilege validation by kernel calls
and

2) there were starting to be some Unix application environments that
had multiple tens if not hundreds of memory objects simultaneously
mapped in a single address space. The porting of those application
environments to the ROMP/RIOS platforms required attempting to map
some cluster/collection of simultaneously used memory mapped objects
into a single shared library (which could in turn be mapped into the
address space using a single segment register).

The issue in ROMP/RIOS with the limitation of sixteen segment
registers was the paradigm translation of applications involving large
numbers of individual memory mapped objects into paradigm with
collections or libraries of memory mapped objects.

The proprietary operating system's approach, with application inline
code frequently and quickly changing addressed objects, wasn't
possible in the Unix environment, and the approach of doing an
extremely large number of kernel calls for something that had been
anticipated to take a couple of instructions wasn't practical.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm

Capitan Mutanda

Jun 27, 2003, 3:32:41 AM
westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote in message news:<s0y8zo7...@diesel.graphics.cornell.edu>...
[snip]

> Well, you would write your extension. You would compile it into an
> executable segment (just as any other source file is compiled) and put
> it in some directory, either one of your own or a system directory,
> and give read/execute permissions to everyone. The user would make
> sure that the correct directory is included in his/her load path. Then
> when the user invokes a segment that refers to any of the entry points
> in your segment, and execution reaches the actual call to that entry
> point, the system follows the search path until it finds a segment
> containing that entry point. That segment is bound into the user's
> process, the link is resolved, and things proceed normally. No
> configuration files, no dlopen or dlsym. It's just a routine dynamic
> link.

Great! Now I've seen the light on Multics segments! Wonder why this
approach was never applied to modern (sic!) OSes.

One more question, security related. How does Multics enforce that
certain segments cannot be overridden by the user? Something like
writing myself some code for nethack (I don't know the Multics
equivalent) so that the die() stuff will automatically resurrect my
character?

/CM

John W Gintell

Jun 27, 2003, 12:10:07 PM

When a procedure wants access to a data segment it usually calls
hcs_$initiate to get a segment number to it giving a full pathname
(perhaps resolved by the command from a relative pathname) and it either
finds it or doesn't. If the reference is via dynamic linking where just
a name was used, then the search rules in effect are used to search a
specified set of directories. So even hcs_ (the gate to ring 0
and all the functions of the OS) could be substituted with something
else. But of course that doesn't get you into ring 0 since only the real
hcs_ can be used for that purpose; so the substitute hcs_ would have to
eventually call the real hcs_ to get anything done that only the OS can
do. The same applies to anything in a ring below that in which the
current program is running.

This is a slight simplification: each segment has r, w, and/or
e(xecute) access and a set of ring brackets (r1, r2, r3). w only works
when running in r1 or below, r in r2 or below, and e when running in
r3 down to r2 means the segment is a gate and a ring change will
occur.

> /CM


Barry Margolin

Jun 27, 2003, 1:40:49 PM
In article <e5f9f063.03062...@posting.google.com>,

Capitan Mutanda <capitan...@hotmail.com> wrote:
>westin*nos...@graphics.cornell.edu (Stephen H. Westin) wrote in message
>news:<s0y8zo7...@diesel.graphics.cornell.edu>...
>[snip]
>> Well, you would write your extension. You would compile it into an
>> executable segment (just as any other source file is compiled) and put
>> it in some directory, either one of your own or a system directory,
>> and give read/execute permissions to everyone. The user would make
>> sure that the correct directory is included in his/her load path. Then
>> when the user invokes a segment that refers to any of the entry points
>> in your segment, and execution reaches the actual call to that entry
>> point, the system follows the search path until it finds a segment
>> containing that entry point. That segment is bound into the user's
>> process, the link is resolved, and things proceed normally. No
>> configuration files, no dlopen or dlsym. It's just a routine dynamic
>> link.
>
>Great! Now I've seen the light on Multics segments! Wonder why this
>approach was never applied to modern (sic!) OSes.

Unix does something similar with the LD_LIBRARY_PATH environment variable.

I think Stephen is conflating multiple concepts in his responses. The
ability to search for dynamically-linked libraries is not really dependent
on the Multics segmentation model, since Unix can do the searching as well.

The Multics segmented memory architecture makes it easy for compilers to
generate position-independent code, which is needed for dynamic linking.
When calling from one segment to another, a register is automatically
loaded with a pointer to the base of the new segment, and the code can then
address relative to that register to access other subroutines or data in
the segment. Note that it's *still* necessary to generate PIC -- a
compiler that doesn't generate code that makes use of this register would
not be usable to compile dynamically-loaded subroutines.
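
On Unix the same requirement surfaces as the -fPIC compiler flag. A
minimal example, with file and function names hypothetical:

/* ext.c -- a minimal dynamically loadable routine.
 *
 *   cc -fPIC -shared -o libext.so ext.c
 *
 * Without -fPIC the object generally can't go into a shared library
 * on most platforms; the loader then finds libext.so through the
 * link-time path or LD_LIBRARY_PATH, as described above. */
int ext_add(int a, int b)
{
    return a + b;
}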

--
Barry Margolin, barry.m...@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Barry Margolin

Jun 27, 2003, 1:46:00 PM
In article <3ef889a4$7$fuzhry+tra$mr2...@news.patriot.net>,

Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:
>In <355431eb.03062...@posting.google.com>, on 06/23/2003
> at 02:15 PM, k...@clement.name (Ken Clement) said:
>>Would this include rings?
>
>At least 64; probably 256, unless I used a more general mechanism.

I'm not sure that rings really scale to that many. If you need lots of
them, they almost certainly wouldn't fit into a nice, hierarchical
scheme. Capabilities are probably what you want in that case.

Remember, the original Multics architecture specified 64 rings. I expect
one of the reasons they dropped it down to 8 was because they couldn't
figure out a sensible way to make use of so many.

Barry Margolin

Jun 27, 2003, 1:58:27 PM
In article <3EFB8336...@yahoo.com>,

Peter Flass <peter...@yahoo.com> wrote:
>"Stephen H. Westin" wrote:
>>
>> What does that mean? Can a program access only 6 segments without
>> swapping segment registers?
>
>This is true, but my understanding of Multics is that it worked this
>way. The segment and base registers would be paired: CS:EIP current
>code segment, DS:EDX current part of linkage segment, SS:EBP stack
>segment, leaving ES, FS, and GS free to access other segments. How much
>code (basic blocks) reference six segments?

But the Multics hardware also supported Indirect-To-Segment (ITS) pointers.
A double-word memory location would contain a segment number and offset,
and you could indirect through this to access any location in your address
space. This avoids having to keep on reloading the scarce pointer
registers; these were mainly used for the "important" segments that you
described.
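
In rough C terms, an ITS pointer is just such a pair (the 18-bit word
offset matches the segment size discussion above; the tag, bit offset,
and modifier fields are omitted, so the layout is approximate):

#include <stdint.h>

/* Conceptual picture of a Multics ITS pointer: a double word holding
 * a segment number and a word offset. The hardware words also carry
 * an ITS tag, a bit offset, and a modifier field, omitted here. */
struct its_pointer {
    uint32_t segno;    /* segment number                         */
    uint32_t offset;   /* 18-bit word offset within the segment  */
};
/* An indirect reference through such a pair can reach any segment in
 * the address space without first loading a pointer register. */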

Does the IA-32 support indirection like this, or do you always have to load
a segment register to access another segment?

John W Gintell

Jun 27, 2003, 2:06:39 PM
Barry Margolin wrote:
> In article <3ef889a4$7$fuzhry+tra$mr2...@news.patriot.net>,
> Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:
>
>>In <355431eb.03062...@posting.google.com>, on 06/23/2003
>> at 02:15 PM, k...@clement.name (Ken Clement) said:
>>
>>>Would this include rings?
>>
>>At least 64; probably 256, unless I used a more general mechanism.
>
>
> I'm not sure that rings really scale to that many. If you need lots of
> them, they almost certainly wouldn't fit into a nice, hierarchical scheme.
> Capabilities is probably what you want in that case.
>
> Remember, the original Multics architecture specified 64 rings. I expect
> one of the reasons they dropped it down to 8 was because they couldn't
> figure out a sensible way to make use of so many.
>

The 64-ring Multics had no hardware support for rings; that was added
later. A Segment Descriptor Word (SDW) was 36 bits and had to contain
a page table address, the segment size, the rew access bits, perhaps a
couple of others, and the ring brackets - 9 bits was all that could be
spared for the three ring numbers, i.e. 3 bits each, hence 8 rings.

John W Gintell

Jun 27, 2003, 2:21:57 PM
Barry Margolin wrote:
> In article <3EFB8336...@yahoo.com>, Peter Flass
> <peter...@yahoo.com> wrote:
>
>> "Stephen H. Westin" wrote:
>>
>>> What does that mean? Can a program access only 6 segments without
>>> swapping segment registers?
>>
>> This is true, but my understanding of Multics is that it worked
>> this way. The segment and base registers would be paired: CS:EIP
>> current code segment, DS:EDX current part of linkage segment, SS:EBP
>> stack segment, leaving ES, FS, and GS free to access other
>> segments. How much code (basic blocks) reference six segments?
>
>
> But the Multics hardware also supported Indirect-To-Segment (ITS)
> pointers. A double-word memory location would contain a segment
> number and offset, and you could indirect through this to access any
> location in your address space. This avoids having to keep on
> reloading the scarce pointer registers; these were mainly used for
> the "important" segments that you described.
>
> Does the IA-32 support indirection like this, or do you always have
> to load a segment register to access another segment?
>

According to my memory and a quick glance at one of the manuals, I think
it does not support indirect addressing through a far pointer
and you must load a segment register to access it. It is possible this
loading is used also to invalidate parts of the TLB (translation
lookaside buffer) that is used to cache address translations and avoid
page table lookup.

I also don't think this has changed for the IA-64.

To my recollection Multics didn't use this feature very much, except
to make dynamic linking work by generating a linkage fault when
attempting to address something external.

John Ahlstrom

Jun 27, 2003, 3:07:47 PM

In alt.os.multics
Tom Van Vleck wrote:
>
> I don't have time now to do this topic justice, but here are a
> few remarks, from the point of view of someone who left the
> OS design team in 1981.
>
--snip snip
>
> 7. Discussion on the "kernel" point has missed a key aspect.
> Microkernel systems such as Mach work by message passing. Multics
> had a notion of a "kernel" and there was a design project to
> separate ring 0 into kernel and non-kernel, and multiple projects
> to move stuff out of ring 0, mostly never shipped. But these two
> are not the same thing: there was never any proposal to introduce
> message-passing calls into the Multics architecture. So this is
> a big choice, to be made at the very beginning. Message passing
> architectures like Mach's are great for structure, but there's a
> heavy performance penalty you pay up front, in argument
> marshaling and so on. I worked on Tandem systems, and because
> they were fundamentally message passing, they were able to expand
> to multiprocessors and clusters with ease.
-snip snip

What about architectural support for message passing?
IIRC the GEC 4080 had such support.
From: http://www.cucumber.demon.co.uk/geccl/4000series/4080sales.html

> 4000 NUCLEUS TIMES
> (microseconds, typical)
[JKA: machine had 550nsec memory cycle]
> Semaphore operations
> no program change - 4.95
> program change - 35
> Segment load
> - 7.5
> Inter-chapter branch
> no segment change - 4.6
> segment change - 9.1
> Start input/output
> - 20.0
> Interrupt
> no program change- 8.7
> program change - 42
> Inter-process message
> no program change - 35
> program change - 55
>


Does the 4080 have any successors?
Any similar support in other architectures?
Can such support change the performance penalty enough to
make message passing cost-effective?

--
I don't think average programmers would get along very well
with languages that force them to think about their design
decisions before they plunge into coding.
Brian Inglis

Barry Margolin

unread,
Jun 27, 2003, 3:11:40 PM6/27/03
to
In article <3EFC8B49...@shore.net>,
John W Gintell <gin...@shore.net> wrote:

>Barry Margolin wrote:
> > But the Multics hardware also supported Indirect-To-Segment (ITS)
> > pointers.
...

>To my recollection Multics didn't use this feature very much with the
>exception of to make dynamic linking work by generating a linkage fault
>when attempting to address something external.

Weren't PL/I pointer variables usually ITS pointers?

Stephen Fuld

unread,
Jun 27, 2003, 3:40:50 PM6/27/03
to

"John Ahlstrom" <jahl...@cisco.com> wrote in message
news:3EFC9603...@cisco.com...

>
> In alt.os.multics
> Tom Van Vleck wrote:
> >
> > I don't have time now to do this topic justice, but here are a
> > few remarks, from the point of view of someone who left the
> > OS design team in 1981.
> >
> --snip snip
> >
> > 7. Discussion on the "kernel" point has missed a key aspect.
> > Microkernel systems such as Mach work by message passing. Multics
> > had a notion of a "kernel" and there was a design project to
> > separate ring 0 into kernel and non-kernel, and multiple projects
> > to move stuff out of ring 0, mostly never shipped. But these two
> > are not the same thing: there was never any proposal to introduce
> > message-passing calls into the Multics architecture. So this is
> > a big choice, to be made at the very beginning. Message passing
> > architectures like Mach's are great for structure, but there's a
> > heavy performance penalty you pay up front, in argument
> > marshaling and so on. I worked on Tandem systems, and because
> > they were fundamentally message passing, they were able to expand
> > to multiprocessors and clusters with ease.
> -snip snip
>
> What about architectural support for message passing?

Didn't the Elxsi "mini-super" computer have such support?

--
- Stephen Fuld
e-mail address disguised to prevent spam


Robert S. Coren

unread,
Jun 27, 2003, 3:55:11 PM6/27/03
to
In article <MF0La.43$Cd7...@paloalto-snr1.gtei.net>,

Barry Margolin <barry.m...@level3.com> wrote:
>In article <3EFC8B49...@shore.net>,
>John W Gintell <gin...@shore.net> wrote:
>>Barry Margolin wrote:
>> > But the Multics hardware also supported Indirect-To-Segment (ITS)
>> > pointers.
>...
>>To my recollection Multics didn't use this feature very much with the
>>exception of to make dynamic linking work by generating a linkage fault
>>when attempting to address something external.
>
>Weren't PL/I pointer variables usually ITS pointers?

For unpacked pointers, s/usually/invariably/, IIRC.
--
---Robert Coren (co...@panix.com)------------------------------------
Aw, well... I guess some of us talks too much, anyway.
--Rackety Coon Chile (Walt Kelly)

Barry Margolin

unread,
Jun 27, 2003, 4:39:27 PM6/27/03
to
In article <bdi7ev$9m8$1...@panix5.panix.com>,

Robert S. Coren <co...@panix.com> wrote:
>In article <MF0La.43$Cd7...@paloalto-snr1.gtei.net>,
>Barry Margolin <barry.m...@level3.com> wrote:
>>In article <3EFC8B49...@shore.net>,
>>John W Gintell <gin...@shore.net> wrote:
>>>Barry Margolin wrote:
>>> > But the Multics hardware also supported Indirect-To-Segment (ITS)
>>> > pointers.
>>...
>>>To my recollection Multics didn't use this feature very much with the
>>>exception of to make dynamic linking work by generating a linkage fault
>>>when attempting to address something external.
>>
>>Weren't PL/I pointer variables usually ITS pointers?
>
>For unpacked pointers, s/usually/invariably/, IIRC.

That's what I thought. And IIRC, unpacked pointers were the norm (packed
pointers were mainly used when there was a need to put a pointer into the
same field that might also hold a fixed bin(35), or there was some other
reason you needed to save a word of memory). So ITS pointers were used
extremely frequently.

Christopher Browne

unread,
Jun 27, 2003, 4:53:08 PM6/27/03
to
After takin a swig o' Arrakan spice grog, "Shmuel (Seymour J.) Metz" <spam...@library.lspace.org.invalid> belched out...:

>>Would this include rings?
>
> At least 64; probably 256, unless I used a more general mechanism.

Would it be _truly_ useful to have so many rings?

That implies that you have 64 or 256 "application layers."

I'm not sure I can fathom an application that could need such a fine
layering of security capabilities.
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://cbbrowne.com/info/linuxxian.html
If at first you don't succeed, try duct tape. If duct tape doesn't
work, give up.

John W Gintell

unread,
Jun 27, 2003, 4:57:49 PM6/27/03
to
Barry Margolin wrote:
> In article <bdi7ev$9m8$1...@panix5.panix.com>,
> Robert S. Coren <co...@panix.com> wrote:
>
>>In article <MF0La.43$Cd7...@paloalto-snr1.gtei.net>,
>>Barry Margolin <barry.m...@level3.com> wrote:
>>
>>>In article <3EFC8B49...@shore.net>,
>>>John W Gintell <gin...@shore.net> wrote:
>>>
>>>>Barry Margolin wrote:
>>>>
>>>>>But the Multics hardware also supported Indirect-To-Segment (ITS)
>>>>>pointers.
>>>>
>>>...
>>>
>>>>To my recollection Multics didn't use this feature very much with the
>>>>exception of to make dynamic linking work by generating a linkage fault
>>>>when attempting to address something external.
>>>
>>>Weren't PL/I pointer variables usually ITS pointers?
>>
>>For unpacked pointers, s/usually/invariably/, IIRC.
>
>
> That's what I thought. And IIRC, unpacked pointers were the norm (packed
> pointers were mainly used when there was a need to put a pointer into the
> same field that might also hold a fixed bin(35), or there was some other
> reason you needed to save a word of memory). So ITS pointers were used
> extremely frequently.
>

I thought that the PL/I compiler usually generated code to load the
pointer which was stored as an ITS pair into a pointer register (LPRn
instruction) and then the referencing code was via the pointer register
in instructions with the indirect via pointer register flag turned on.
For that form of addressing the offset is only 15 bits since three of
the 18 address bits are used to specify the pointer register number so
it wouldn't work for all structures.

Disclaimer - my memory of this is fuzzy and I've only taken recent quick
glances at AL39 (and the Intel manuals).

Tom Van Vleck

unread,
Jun 27, 2003, 5:22:30 PM6/27/03
to
On Fri, 27 Jun 2003 20:57:49 GMT
John W Gintell <gin...@shore.net> wrote:

> I thought that the PL/I compiler usually generated code to load
> the pointer which was stored as an ITS pair into a pointer
> register (LPRn instruction) and then the referencing code was
> via the pointer register in instructions with the indirect via
> pointer register flag turned on. For that form of addressing
> the offset is only 15 bits since three of the 18 address bits
> are used to specify the pointer register number so it wouldn't
> work for all structures.
>
> Disclaimer - my memory of this is fuzzy and I've only taken
> recent quick glances at AL39 (and the Intel manuals).
>

The PL/I compiler generated instructions like

epp3 sp|fred,*

to load an ITS pointer from offset "fred" in the stack frame
(ie relative to register sp) into pointer register 3.
"Effective pointer to pointer register 3." The ITS pointer
could have a full 18 bit word offset. References to the
pointer could then be done such as

call6 3|0

which would call the entrypoint pointed to by register 3 (plus a
0 word offset). I think that the limitation on offsets was one
reason why most pointer values were loaded into a pointer
register before use.


Tom Van Vleck

unread,
Jun 27, 2003, 5:26:00 PM6/27/03
to
On 27 Jun 2003 20:53:08 GMT
Christopher Browne <cbbr...@acm.org> wrote:

> After takin a swig o' Arrakan spice grog, "Shmuel (Seymour J.)
> Metz" <spam...@library.lspace.org.invalid> belched out...:
> >>Would this include rings?
> >
> > At least 64; probably 256, unless I used a more general
> > mechanism.
>
> Would it be _truly_ useful to have so many rings?
>
> That implies that you have 64 or 256 "application layers."
>
> I'm not sure I can fathom an application that could need such a
> fine layering of security capabilities.

Look up PSOS, the Provably Secure Operating System, developed at
SRI (by Multicians) in the 1970s. It had a 15-layer
architecture, as I remember, with each layer strictly modeled in
terms of Parnas-style O and V functions. And it looked sort of
like Multics except for a few features that were hard to model:
one was disk quota. Elegant work.

Peter Flass

unread,
Jun 27, 2003, 6:23:34 PM6/27/03
to
Barry Margolin wrote:
>
> But the Multics hardware also supported Indirect-To-Segment (ITS) pointers.
> A double-word memory location would contain a segment number and offset,
> and you could indirect through this to access any location in your address
> space. This avoids having to keep on reloading the scarce pointer
> registers; these were mainly used for the "important" segments that you
> described.
>
> Does the IA-32 support indirection like this, or do you always have to load
> a segment register to access another segment?
>
Unfortunately, the latter.

Christopher Browne

unread,
Jun 27, 2003, 6:52:16 PM6/27/03
to

OK, so 15 layers could be useful. Only 49 to go...

I'm not merely being facetious (though I _am_ being that :-)); one of
the problems with security systems is that it gets fabulously
complicated to configure the system to actually be secure if you have
too many "knobs" to play with.

Windows NT is a good example of this; it seems to have quite a
sophisticated ACL system, which is theoretically nice, but which is
quite futile to use in practice, because you have to go off and decide
how to set literally thousands of ACL objects in order to apply it to
a real system. In effect, in order to configure _one box_, you might
need months of a security specialist's time.

Again, I'm being a bit facetious, but only a bit.

From what I can see, the typical usage of ACLs has been an almost
unmitigated disaster because so many need to be set in order to apply
a security policy.

I don't think I'm stretching analogy too far to suggest that adding a
huge number of rings might lead to a similar need to configure
enormous numbers of controls.
--
let name="cbbrowne" and tld="acm.org" in name ^ "@" ^ tld;;
http://www3.sympatico.ca/cbbrowne/x.html
If vegetarians eat vegetables, what do humanitarians eat?

Charlie Spitzer

unread,
Jun 27, 2003, 7:05:04 PM6/27/03
to

"Christopher Browne" <cbbr...@acm.org> wrote in message
news:bdihr0$t7jrv$2...@ID-125932.news.dfncis.de...

With ACL propagation from superior objects, and a capability to set an
ACL as a default for lower objects, it isn't that hard to get right.
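
A minimal sketch of that scheme in C, with invented types (nothing here
is a real API): each directory may carry a default ACL, and a new
object picks up the nearest default above it, so policy is set once per
subtree rather than once per object.

    struct acl;                      /* opaque for this sketch */

    struct dir {
        struct dir *parent;          /* NULL at the root            */
        struct acl *default_acl;     /* NULL if no default set here */
    };

    /* The ACL a newly created object starts with: the nearest
       default on the path back to the root. */
    struct acl *initial_acl(struct dir *d)
    {
        for (; d != NULL; d = d->parent)
            if (d->default_acl != NULL)
                return d->default_acl;
        return NULL;                 /* fall back to system policy */
    }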


Charles Shannon Hendrix

unread,
Jun 28, 2003, 1:22:42 AM6/28/03
to
In article <un0g5p...@graphics.cornell.edu>, Stephen H. Westin wrote:

> Sorry for bringing Windows into this, but it is the dominant desktop
> OS, for better or for worse. I think most of what I say applies to Mac
> and Linux systems, as well.

The Linux 2.6 kernel is supposed to address quite a few of the problems
brought up in this thread.

It's definitely true that the current kernel is bad at staying
responsive when operating under load. It has great throughput, but it's
bad when sharing a system or running a lot of desktop applications
(which have patterns similar to timesharing), or even when you need
good response on headless servers.

A number of large changes in the 2.6 kernel address these and other
issues.

We'll see how well they do.


--
Ah... you gotta love it when your ISP switches to a SPAMMING newsfeed.
Sigh...


Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 12:15:24 AM6/29/03
to
In <3EFA1EDC...@yahoo.com>, on 06/25/2003
at 10:13 PM, Peter Flass <peter...@yahoo.com> said:

>One thing I haven't seen mentioned is threads. I would think a
>modern Multics would need threads.

No. A modern Multics with expensive process creation would need
threads. If you can make it inexpensive to create and destroy
processes then you don't need threads.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT

Any unsolicited bulk E-mail will be subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail.

Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply
to spam...@library.lspace.org


Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 12:15:59 AM6/29/03
to
In <3EFB56AF...@shore.net>, on 06/26/2003

at 08:25 PM, John W Gintell <gin...@shore.net> said:

>I point out that the IA-32 architecture (Pentium):

>-has segments that can be as large as 4GB

Adequate for the original Multics; not adequate for an ab initio
system inspired by Multics.

>-supports 8192 segments

Far too few.

>-has 6 segment address registers to access these segments

Limiting. Especially when neither the 80486 nor the Pentium had a TLB
for segment descriptors.

>-has 4 rings with gates

Barely enough for the old Multics; certainly not enough for a modern
replacement.

>-the segments are mapped into a linear address space that is paged

Limiting the aggregate size of all active segments to 4GB. That linear
address space is the worst single feature of the architecture.

Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 12:17:21 AM6/29/03
to
In <uy8zoi...@graphics.cornell.edu>, on 06/26/2003
at 06:06 PM, westin*nos...@graphics.cornell.edu (Stephen H. Westin)
said:

>What does that mean? Can a program access only 6 segments without
>swapping segment registers?

Correct. And that swap was *very* expensive in the 80486 and Pentium.

>Again, I don't quite understand. Must the segments be contiguous?

No. But the aggregate length of all the segments must be LE 4 GiB, and
you'd have to periodically compact them to even get 4 GiB.

Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 12:12:18 AM6/29/03
to
In <ur85io...@graphics.cornell.edu>, on 06/25/2003
at 01:33 PM, westin*nos...@graphics.cornell.edu (Stephen H. Westin)
said:

>Sure, when I
>switch to the Web browser, that has to be paged in. But that's a
>noticeable delay as substantially the whole thing gets paged in.

Then something is broken in your OS; there is no reason to page in more
than the pages that are actually referred to.

>In MATLAB, for example, it's not really practical to deal with
>arrays larger than the physical memory of the machine.

I don't know how MATLAB does things, but lots of other programs
handle large arrays with no problem. The trick is to lay out your data
in a fashion that takes locality of reference into account.

>Yeah, but how often do you achieve full CPU utilization with real
>sharing between multiple applications?

Does it matter? I'm concerned with saving *my* time.

>In my experience, Windows 2000
>is pretty awful at, say, running a long computation in the
>background and doing anything interactive (especially with
>significant memory demands) at the same time.

It's pretty awful, period. It's no basis for judging what has been
done, much less what can be done. Certainly OS/2 doesn't have the
problems you describe.

Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 11:47:59 AM6/29/03
to
In <7B%Ka.32$Cd7...@paloalto-snr1.gtei.net>, on 06/27/2003

at 05:58 PM, Barry Margolin <barry.m...@level3.com> said:

>Does the IA-32 support indirection like this,

No.

>or do you always have to load
>a segment register to access another segment?

Except for calls, but I don't know whether any current system uses
that mechanism.

Shmuel (Seymour J.) Metz

unread,
Jun 29, 2003, 11:42:55 AM6/29/03
to
In <sp%Ka.27$Cd7...@paloalto-snr1.gtei.net>, on 06/27/2003

at 05:46 PM, Barry Margolin <barry.m...@level3.com> said:

>Remember, the original Multics architecture specified 64 rings. I
>expect one of the reasons they dropped it down to 8 was because they
>couldn't figure out a sensible way to make use of so many.

I suspect that it was simply a cost saving measure. Architects tend to
underestimate how large a field needs to be to accommodate future
demands. Remember the infamous 640KB.

Andi Kleen

unread,
Jun 29, 2003, 7:46:47 PM6/29/03
to
Barry Margolin <barry.m...@level3.com> writes:

> Does the IA-32 support indirection like this, or do you always have to load
> a segment register to access another segment?

IA-32 supports it for jumps ("far jumps"), but not data access.

-Andi

John W Gintell

unread,
Jun 29, 2003, 8:48:21 PM6/29/03
to
Shmuel (Seymour J.) Metz wrote:
> In <sp%Ka.27$Cd7...@paloalto-snr1.gtei.net>, on 06/27/2003
> at 05:46 PM, Barry Margolin <barry.m...@level3.com> said:
>
>
>>Remember, the original Multics architecture specified 64 rings. I
>>expect one of the reasons they dropped it down to 8 was because they
>>couldn't figure out a sensible way to make use of so many.
>
>
> I suspect that it was simply a cost saving measure. Architects tend to
> underestimate how large a filed needs to be to accommodate future
> demands. Remember the infamous 640KB.
>

In Multics we tried very hard to have few limits. This decision was
based on the fact that we wanted the SDW to be one word only and there
weren't enough bits for more rings.

And as you've seen in other discussions in this group people have had
a hard time finding use for many rings.

John W Gintell

unread,
Jun 29, 2003, 8:57:03 PM6/29/03
to
Shmuel (Seymour J.) Metz wrote:
> In <3EFB56AF...@shore.net>, on 06/26/2003
> at 08:25 PM, John W Gintell <gin...@shore.net> said:
>
>
>>I point out that the IA-32 architecture (Pentium):
>
>>-supports 8192 segments
>
>
> Far too few.
>
I'm curious why you think this limit is too small.

For procedures? I suppose one could envision a system where each
separately compiled procedure was in a separate segment which would give
maximal flexibility for replacing individual procedures with relinking.
But that is a huge number and would imply lots of dynamic linking which
is expensive (that's why we created the binder to prelink together
related programs).

For data where each segment might have different access rights for
different sets of people? A data base system with each record or
relation or some such entity in a separate segment? Is such a model
reasonable?

Christopher Browne

unread,
Jun 29, 2003, 11:44:59 PM6/29/03
to
Oops! John W Gintell <gin...@shore.net> was seen spray-painting on a wall:

> And as you've seen in in other discussions in this group people have
> had a hard time finding use for many rings.

I find it interesting that there has been so little discussion of what
extra rings would get used for. I challenged (hoping for good
responses) the notion that having much more than 8 would be useful.

The only comment that addressed that was one indicating that someone
once designed a "provably secure" operating system that used 16 rings.
I'll buy that this could be used to argue that some number >16 could
be useful.

It's also possible that the world isn't ready for "provably secure"
systems, and that a mere 8 rings could be compellingly better than the
systems currently available that typically barely have two to work
with...
--
output = reverse("ac.notelrac.teneerf" "@" "454aa")
http://www3.sympatico.ca/cbbrowne/linuxxian.html
"Programming is an unnatural act." -- Alan Perlis

David Spencer

unread,
Jun 30, 2003, 5:38:58 AM6/30/03
to
On Mon, 30 Jun 2003 00:57:03 GMT, John W Gintell <gin...@shore.net>
wrote:

>I'm curious why you think this limit is too small.
>
(snip)


>
>For data where each segment might have different access rights for
>different sets of people? A data base system with each record or
>relation or some such entity in a separate segment? Is such a model
>reasonable?

Yes, yes and yes (respectively). Ingres and its cousins put each
relation in a separate filesystem file. This has loads of advantages:

- one layer of filesystem management, not two. (hence many other
DBs using ghastly workarounds like "direct I/O" and raw partitions)
- access controls enforced by the OS at the relation level
- can use OS backup, recovery, defrag (etc) tools at the relation level
- free space recycled back to the OS, not just to the DB

So why do Oracle, Sybase, Informix, SQL Sewer all use the single huge
file approach? Portability, habit, hubris...

--
David Spencer
Romford, Essex
Recovered until financial depletion

John W Gintell

unread,
Jun 30, 2003, 11:32:48 AM6/30/03
to
David Spencer wrote:
> On Mon, 30 Jun 2003 00:57:03 GMT, John W Gintell <gin...@shore.net>
> wrote:
>
>
>> I'm curious why you think this limit is too small.
>>
>
> (snip)
>
>> For data where each segment might have different access rights for
>> different sets of people? A data base system with each record or
>> relation or some such entity in a separate segment? Is such a model
>> reasonable?
>
>
> Yes, yes and yes (respectively). Ingres and its cousins put each
> relation in a separate filesystem file. This has loads of
> advantages:
>
> - one layer of filesystem management, not two. (hence many other DBs
>   using ghastly workarounds like "direct I/O" and raw partitions)
> - access controls enforced by the OS at the relation level
> - can use OS backup, recovery, defrag (etc) tools at the relation level
> - free space recycled back to the OS, not just to the DB


Good points here.

Concerning the upper limit question:
What do you think would be the number of relations in a huge database?
100, 1000, 10,000, more?

Even with a colossal number of relations it would seem unlikely that
they all need to have segment numbers assigned to a process at the same
time.


> So why do Oracle, Sybase, Informix, SQL Sewer all use the single huge
> file approach? Portability, habit, hubris...
>

I'd imagine portability is a serious issue that brings out the lowest
common denominator. I consulted on a patent infringement suit against a
company that had a huge system whose clients and servers ran on Windows
NT/95, MacOS, Unix/Linux, Novell and there were huge challenges on how
to use operating system features. As it is, the system was written in C
and the number of #IFDEFs for platform-specific code was horrendous and
made much of the source code unreadable.

Alan T. Bowler

unread,
Jun 30, 2003, 12:21:34 PM6/30/03
to
John W Gintell wrote:
>
> And as you've seen in in other discussions in this group people have had
> a hard time finding use for many rings.

Which is why Couleur went with capabilities when he designed
the NSA architecture which he expected to be the next stage
for the Multics hardware.

Morven Gentleman once commented something like "Multicians have
spent years saying that security should be structured with rings
like an onion, but had not stopped to observe that onions usually
have multiple centres."

J Ahlstrom

unread,
Jun 30, 2003, 12:38:28 PM6/30/03
to
Alan T. Bowler wrote:
> John W Gintell wrote:
>
>>And as you've seen in in other discussions in this group people have had
>>a hard time finding use for many rings.
>
>
> Which is why Couleur went with capabilities when he designed
> the NSA architecture which he expected to be the next stage
> for the Multics hardware.
>

Are there any accessible descriptions of NSA
or the machines that implement it?

JKA

Shmuel (Seymour J.) Metz

unread,
Jun 30, 2003, 10:51:59 AM6/30/03
to
In <3EFF8AE8...@shore.net>, on 06/30/2003

at 12:57 AM, John W Gintell <gin...@shore.net> said:

>I'm curious why you think this limit is too small.

Because segments are the mechanism for hooking into the file system,
and these days accessing thousands of distinct files is not unusual.

Barry Margolin

unread,
Jun 30, 2003, 2:04:27 PM6/30/03
to
In article <3efe67dc$4$fuzhry+tra$mr2...@news.patriot.net>,

Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:
>In <3EFA1EDC...@yahoo.com>, on 06/25/2003
> at 10:13 PM, Peter Flass <peter...@yahoo.com> said:
>
>>One thing I haven't seen mentioned is threads. I would think a
>>modern Multics would need threads.
>
>No. A modern Multics with expensive process creation would need
>threads. If you can make it inexpensive to create and destroy
>processes then you don't need threads.

You'd also need an API that allows ordinary users to create processes; on
Multics, process creation is a privileged operation.

David Spencer

unread,
Jun 30, 2003, 2:04:16 PM6/30/03
to
On Mon, 30 Jun 2003 15:32:48 GMT, John W Gintell <gin...@shore.net>
wrote:

>Concerning the upper limit question:


>What do you think would be the number of relations in a huge database?
>100, 1000, 10,000, more?

Oddly enough I was thinking myself that it might still be a struggle to
get as far as 8192 in a database! A ghastly sprawling messy schema
might have five hundred relations *but* you need indexes too, perhaps
averaging three per relation. Then there's data partitioning (eg,
certain core relations might get a new data segment allocated every
month) and also explicit disk striping (because DBAs don't understand
RAID). Maybe throw in a test schema and a reference schema too. So on
a bad day we might *just* be in hailing distance of 8192 filesystem
objects active in the same back end process at the same time (but
struggling to *bust* 8192 until some luser tries something
pathological).

>Even with a collosal number of relations it would seem unlikely that
>they all need to have segment numbers assigned to a process at the same
>time.

Assuming a single monolithic back end server process, the "right thing"
is to keep each relation (and index) open from first reference until
flushed at a journalling consistency point. Having multiple back end
processes doesn't really change that; all the back ends need to be able
to see all the relations (with read consistency).

Maybe a multithreaded webserver process would be a better candidate for
needing 10000+ filesystem objects open at the same time?

>I'd imagine the portability a serious issue that brings out the lowest
>common denominator.

And bloody linear address space is top of the charge sheet.

>I consulted on a patent infringement suit against a
>company that had a huge system whose clients and servers ran on Windows
>NT/95, MacOS, Unix/Linux, Novell and there were huge challenges on how
>to use operating system features. As it is, the system was written in C
>and the number of #IFDEFs for platform-specific code was horrendous and
>made much of the source code unreadable.

I was going to rant (the C preprocessor being a wellspring of evil in
itself), but #IFDEF obfuscation as a shield against lawyerage does have
its attractions. Anyway, I'll never understand why these kids who call
themselves software engineers don't/won't/can't modularise in such
circumstances. Oh no, let's just do another #ifdef instead, soooo much
more efficient. Tsk. Time for another St John's Wort methinks...

Barry Margolin

unread,
Jun 30, 2003, 2:16:09 PM6/30/03
to
In article <3f004e8f$13$fuzhry+tra$mr2...@news.patriot.net>,

Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid> wrote:
>In <3EFF8AE8...@shore.net>, on 06/30/2003
> at 12:57 AM, John W Gintell <gin...@shore.net> said:
>
>>I'm curious why you think this limit is too small.
>
>Because segments are the mechanism for hooking into the file system,
>and these days accessing thousands of distinct files is not unusual.

Is it really common to have thousands of files open concurrently in the
same application process?

We have a relational database that we use to manage our customer and
infrastructure data. It has hundreds of relations (especially since we
went on
a normalization spree a couple of years ago), but it's unusual for any
particular application to use more than a few dozen of them. I'd expect
the database server to open and close files on an as-needed basis, perhaps
caching open file handles in an LRU fashion to avoid having to reopen
commonly used relations.
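
Such a handle cache might look like this in C (names invented; a real
server would hash rather than scan, but the LRU bookkeeping is the
point):

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 64          /* far fewer slots than relations */

    struct slot {
        char  name[64];             /* relation (file) name           */
        FILE *fp;                   /* open handle, NULL if slot free */
        unsigned long last_used;    /* logical clock for LRU          */
    };

    static struct slot cache[CACHE_SLOTS];
    static unsigned long now;

    /* Return an open handle for the named relation, reusing a cached
       one when possible, evicting the least recently used when not. */
    FILE *relation_handle(const char *name)
    {
        struct slot *victim = &cache[0];
        int i;

        for (i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].fp && strcmp(cache[i].name, name) == 0) {
                cache[i].last_used = ++now;
                return cache[i].fp;          /* hit */
            }
            if (!cache[i].fp)
                victim = &cache[i];          /* prefer a free slot */
            else if (victim->fp && cache[i].last_used < victim->last_used)
                victim = &cache[i];          /* older candidate */
        }
        if (victim->fp)
            fclose(victim->fp);              /* evict the LRU handle */
        victim->fp = fopen(name, "r+b");
        if (victim->fp) {
            strncpy(victim->name, name, sizeof victim->name - 1);
            victim->name[sizeof victim->name - 1] = '\0';
            victim->last_used = ++now;
        }
        return victim->fp;
    }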

Alan T. Bowler

unread,
Jun 30, 2003, 3:01:22 PM6/30/03
to
J Ahlstrom wrote:

> Are there any accessible descriptions of NSA
> or the machines that implement it?

No place online that I know of. You could order the manual
67 A2 RG95 REV05A, DPS 9000 Assembly Instructions
from Bull, or better, the CD with all the manuals
67 X4 42CD Gcos 8 documentation.

Christopher Browne

unread,
Jun 30, 2003, 3:53:18 PM6/30/03
to
The world rejoiced as David Spencer <dent...@yahoo.co.uk> wrote:
> On Mon, 30 Jun 2003 15:32:48 GMT, John W Gintell <gin...@shore.net>
> wrote:
>
>>Concerning the upper limit question:
>>What do you think would be the number of relations in a huge database?
>>100, 1000, 10,000, more?
>
> Oddly enough I was thinking myself that it might still be a struggle to
> get as far as 8192 in a database! A ghastly sprawling messy schema
> might have five hundred relations *but* you need indexes too, perhaps
> averaging three per relation. Then there's data partitioning (eg,
> certain core relations might get a new data segment allocated every
> month) and also explicit disk striping (because DBAs don't understand
> RAID). Maybe throw in a test schema and a reference schema too. So on
> a bad day we might *just* be in hailing distance of 8192 filesystem
> objects active in the same back end process at the same time (but
> struggling to *bust* 8192 until some luser tries something
> pathological).

SAP R/3 has somewhere around 2000 tables, which probably qualifies as
a "ghastly sprawling messy schema" :-).

Host a test system and you might multiply that a few ways, although it
would probably make sense to, at that point, have a separate DB back
end that might give you another 8192...

>>Even with a collosal number of relations it would seem unlikely that
>>they all need to have segment numbers assigned to a process at the
>>same time.

> Assuming a single monolithic back end server process, the "right
> thing" is to keep each relation (and index) open from first
> reference until flushed at a journalling consistency point. Having
> multiple back end processes doesn't really change that; all the back
> ends need to be able to see all the relations (with read
> consistency).

Sounds like what PostgreSQL does...

> Maybe a multithreaded webserver process would be a better candidate
> for needing 10000+ filesystem objects open at the same time?

Perhaps. Although I'd think that having an efficient database a ring
or two away would diminish the need for that :-).
--
select 'cbbrowne' || '@' || 'acm.org';
http://cbbrowne.com/info/lsf.html
"Starting a project in C/C++ is a premature optimization."
-- Peter Jensen

Peter Flass

unread,
Jun 30, 2003, 5:56:53 PM6/30/03
to

Actually, it's not as bad as it seems. 8192 is the limit on the number
of segments a single process could reference; there is no system-wide
limit. When a segment is initiated it would be assigned a
process-specific segment number and the descriptor stored at the
appropriate place in the LDT. A second process initiating the same
segment could use any segment number it's not already using.

Re my comment on threads -- perhaps a modern Multics wouldn't need
threads. The function of a thread is to allow the address space to be
shared between several separately-schedulable entities. With
segmentation all the necessary parts of the address space could be
shared among multiple processes.

Barry Margolin

unread,
Jun 30, 2003, 6:21:58 PM6/30/03
to
In article <3F00B1BB...@yahoo.com>,

Peter Flass <peter...@yahoo.com> wrote:
>Re my comment on threads -- perhaps a modern Multics wouldn't need
>threads. The function of a thread is to allow the address space to be
>shared between several separately-schedulable entities. With
>segmentation all the necessary parts of the address space could be
>shared among multiple processes.

Most other systems have ways of sharing memory between processes, but
threads are still useful. With shared memory you have to replace all uses
of pointers with offsets or array indices, which isn't as convenient.
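
A minimal sketch of the offset style, assuming only that the region may
be mapped at a different address in each process:

    /* Links are offsets from the region base, not pointers, because
       each process may map the region at a different address. */
    struct node {
        long next_off;              /* offset of next node, -1 ends */
        int  value;
    };

    #define NODE_AT(base, off) ((struct node *)((char *)(base) + (off)))

    int sum_list(void *base, long first_off)
    {
        long off;
        int  total = 0;

        for (off = first_off; off != -1; off = NODE_AT(base, off)->next_off)
            total += NODE_AT(base, off)->value;
        return total;
    }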

Tom Van Vleck

unread,
Jun 30, 2003, 6:39:10 PM6/30/03
to
Barry Margolin <barry.m...@level3.com> wrote:
> Most other systems have ways of sharing memory between
> processes, but threads are still useful. With shared memory
> you have to replace all uses of pointers with offsets or array
> indices, which isn't as convenient.

Multics had threads. There were several user-ring thread
packages. Max Smith did one (the bg command), Frankston did one,
Doug Wells did one, Gary Palter did one.

Russell Williams

unread,
Jun 30, 2003, 7:01:45 PM6/30/03
to
Stephen Fuld" <s.f...@PleaseRemove.att.net> wrote in message
news:651La.24517$3o3.1...@bgtnsc05-news.ops.worldnet.att.net...
>
> "John Ahlstrom" <jahl...@cisco.com> wrote in message
> news:3EFC9603...@cisco.com...
> >
> > In alt.os.multics
> > Tom Van Vleck wrote:
> > What about architectural support for message passing?
>
> Didn't the Elxsi "mini-super" computer have such support?

Elxsi implemented message passing in hardware and microcode, and
used the control of link (port in Mach terms) ownership as the
fundamental system security mechanism (along with the fact that
only the memory manager process had the hardware page tables
in its virtual address space). I/O completions showed up as messages
from controllers. It was basically a multi-server (GNU Hurd-like)
system. Message passing was at least an order of magnitude
slower than a function call (more if you sent much data by value).

1-2 orders of magnitude is well within the bounds where reasonable
partitioning of the OS would make the cost of message passing
insignificant. (On the other hand, a couple of bad partitioning
decisions were made that made those costs painful; refactoring had
to occur). The benefit was that we got excellent scaling from 1-12
processors, including the first (AFAIK) observations of
super-linear speedup (because adding processors added cache).

The machine was strange by today's standards in other ways: 64-bit
registers and integers, but only 32-bit virtual addresses. Cobol
screamed because you could do decimal in registers. It had some of the
first fast implementations of full IEEE SP/DP floating point.

The hardware based messages and multi-server structure made for
some strange effects: on a machine with lots of RAM, you could be
using the source debugger on the memory manager while other users
continued their work without pause. We had a Unix server that
accepted "system call" messages from Posix processes (again, a good
partitioning got us lots of parallelism by farming out work to other
servers without too much time spent in message passing).

A technically interesting and successful design, but both its technical and
marketing niches were closed by the advance of the killer micros. Our
big competition was high-end VAXes, at a time when VAX software was
already entrenched, and the market for that class of hardware was being
supplanted by RISC workstations.

Russell Williams
not speaking for Adobe Systems


Stephen Fuld

unread,
Jun 30, 2003, 11:32:10 PM6/30/03
to

"Russell Williams" <williams...@adobe.com> wrote in message
news:tj3Ma.2789$Ry3.1...@monger.newsread.com...

> Stephen Fuld" <s.f...@PleaseRemove.att.net> wrote in message
> news:651La.24517$3o3.1...@bgtnsc05-news.ops.worldnet.att.net...
> >
> > "John Ahlstrom" <jahl...@cisco.com> wrote in message
> > news:3EFC9603...@cisco.com...
> > >
> > > In alt.os.multics
> > > Tom Van Vleck wrote:
> > > What about architectural support for message passing?
> >
> > Didn't the Elxsi "mini-super" computer have such support?
>
> Elxsi implemented message passing in hardware and microcode,

Rest of very good explanation snipped

Thanks, I am glad I remembered correctly and your explanation of both the
technical and business issues was well done. I remember that it used huge
boards with ECL circuitry and big fans and thus was unsuitable for what we
were looking for at the time, but I was impressed with the
thought that went into its design.

So, the obvious question is then, is there something that makes sense from
that idea to adapt into current microprocessor designs in order to give the
advantages of low cost message passing, and ease the development of more
modular software that would use it?

--
- Stephen Fuld
e-mail address disguised to prevent spam


Dennis Ritchie

unread,
Jul 1, 2003, 12:09:00 AM7/1/03
to

"Alan T. Bowler" <atbo...@thinkage.ca> wrote in message news:3F00638E...@thinkage.ca...
[...]

> Morven Gentleman once commented something like "Multicians have
> spent years saying that security should be structured with rings
> like an onion, but had not stopped to observe that onions usually
> have multiple centres."

Doug McIlroy once wondered, complementarily, why "rings"
were called that, when they were so obviously a 1-dimensional
structure.

Dennis


Cliff Sojourner

unread,
Jul 1, 2003, 12:21:24 AM7/1/03
to
> So, the obvious question is then, is there something that makes sense
> from that idea to adapt into current microprocessor designs in order
> to give the advantages of low cost message passing, and ease the
> development of more modular software that would use it?

if it were easy to get the benefits of a message-passing OS then it
would have happened a long time ago.

programming a Tandem, for example, requires a very different mindset than
programming any *NIX system. by "programming" I mean "doing it properly".

also, as was pointed out earlier in this thread, not all applications can or
should pay the huge cost of message passing for the relatively minor gains
of scalability, atomicity, fault tolerance, manageability, reliability, etc.

but you're on the right track - how can we make message passing systems
attractive to "regular" applications?

tough question!


Sander Vesik

unread,
Jul 1, 2003, 8:37:06 AM7/1/03
to

Couldn't you add something like that onto a "conventional" processor?

--
Sander

+++ Out of cheese error +++

Peter da Silva

unread,
Jul 1, 2003, 1:38:36 PM7/1/03
to
In article <8%7Ma.4154$Xm3.1087@sccrnsc02>,

Cliff Sojourner <c...@employees.org> wrote:
> if it were easy to get the benefits of message passing OS then it would have
> happened a long time ago.

If the message passing is cheap enough (significantly less than system
call overhead in a "traditional" OS) then the message-passing system
can be faster than the traditional one. The problem with message
passing systems isn't the message passing overhead, it's that you have
to do a lot of work trying to avoid any service becoming a bottleneck.
Even on the Amiga, where it was four instructions to put a message on
a queue, the bottlenecks in the file system became a problem.
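
For readers who never saw it, the flavor (this is not the actual Exec
API, just a sketch of the style) is an intrusive queue where "sending"
is a few pointer stores and no copying:

    /* The message lives in the sender's memory; only pointers move. */
    struct msg {
        struct msg *next;
        /* payload follows in the message itself */
    };

    struct port {
        struct msg *head, *tail;
    };

    /* Enqueue: a handful of instructions (plus, on a real system,
       signalling the receiving task). */
    void put_msg(struct port *p, struct msg *m)
    {
        m->next = NULL;
        if (p->tail)
            p->tail->next = m;
        else
            p->head = m;
        p->tail = m;
    }

    struct msg *get_msg(struct port *p)
    {
        struct msg *m = p->head;
        if (m) {
            p->head = m->next;
            if (!p->head)
                p->tail = NULL;
        }
        return m;
    }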

In a monolithic UNIX kernel this kind of thing comes for free: since
each system call automagically gets its own process context to handle
the whole operation from start to finish, you never end up blocked on
a read because some server somewhere was blocked on someone else's
request.

But now that I've mentioned the Amiga, I have to say that it did happen
a long time ago. There are, unfortunately, non-technical reasons why one
system or another becomes dominant or fails (for example, getting used
as a pawn in a war between Jack Tramiel and his former employers didn't
do the Amiga any good).

> but you're on the right track - how can we make message passing systems
> attractive to "regular" applications?

Message passing systems are a natural for GUI applications, and may
turn out still to be what they need. God knows there needs to be SOME
kind of fundamental paradigm shift in that environment.

--
#!/usr/bin/perl
$/="%\n";chomp(@_=<>);print$_[rand$.]

Peter da Silva, just another Perl poseur.

Geoff Lane

unread,
Jul 1, 2003, 1:49:33 PM7/1/03
to
In alt.folklore.computers Peter da Silva <pe...@abbnm.com> wrote:
> If the message passing is cheap enough (significantly less than system
> call overhead in a "traditional" OS) then the message-passing system
> can be faster than the traditional one.

Message passing also has another advantage - it defines interfaces that
cannot be subverted. Monolithic kernels allow poor programmers to bypass
defined interfaces in the interests of "efficiency".

--
Geoff Lane

Barry Margolin

unread,
Jul 1, 2003, 1:57:05 PM7/1/03
to
In article <3f01c9ad$0$56600$bed6...@pubnews.gradwell.net>,

On the other hand, it also traps you into using those interfaces. If you
don't get the design right, it can be difficult to work around it. Ideally
this shouldn't be a problem, but in a practical sense it often is.

Tom Van Vleck

unread,
Jul 1, 2003, 2:54:04 PM7/1/03
to
"Rupert Pigott" wrote:
> I don't see why this should be so. In a NUMA system or a
> message passing system for a message to get from CPU A to
> CPU B it will still have to travel along a very similar
> signal path. So it can't be the plumbing that slows it down
>
> If you are talking about a locally delivered message then
> perhaps it could be slower, simply because you are eating
> bandwidth to make a copy (and pranging the cache to boot).
>
> The trick seems to be to make messages cheap in the
> hardware, this has been done many many times.

One cost of message based systems is making copies of
things. To make a message passing call, one has to at
minimum determine the size of the arguments, allocate a
message object, marshal the arguments into it, queue and
dequeue the message, and free the message object. If the
calling site and the called site do not share memory, then
additional copying and buffering is necessary. The storage
for the copies is either preallocated and mostly idle, or
is allocated and freed from a pool of storage, at the cost
of additional complexity; in either case it adds to memory
pressure.
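
Those steps are easy to see in code. A sketch with invented helper
names, no particular system's API:

    struct channel;
    struct msg;

    /* Invented primitives standing in for what a kernel provides. */
    struct msg *msg_alloc(long body_len);
    void       *msg_body(struct msg *m);
    void        msg_send(struct channel *ch, struct msg *m);
    struct msg *msg_wait(struct channel *ch);
    long        copy_reply(struct msg *m, void *buf, long len);
    void        msg_free(struct msg *m);

    enum { OP_READ = 1 };
    struct read_req { int op, file; long len; };

    /* One read(), message style: every step above is visible work
       that a direct call simply does not do. */
    long fs_read_by_message(struct channel *ch, int file,
                            void *buf, long len)
    {
        struct msg *m = msg_alloc(sizeof(struct read_req)); /* allocate */
        struct read_req *r = msg_body(m);
        long got;

        r->op = OP_READ; r->file = file; r->len = len;      /* marshal  */
        msg_send(ch, m);                                    /* queue    */
        m = msg_wait(ch);                                   /* dequeue  */
        got = copy_reply(m, buf, len);                      /* copy     */
        msg_free(m);                                        /* free     */
        return got;
    }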

Another cost is synchronization. Each allocation, freeing,
queueing, or dequeueing operation needs atomicity; whether
hidden in the hardware or done explicitly in software, this
synchronization requires some cost even if there is never a
conflict that causes one thread to delay.

My experience with message passing systems is that they
start out penalized by a factor of about two compared to
direct call systems, and that by employing many clever
strategies, they can make up about half the deficit after
years of improvement. Sometimes the elegance, uniformity,
and protection provided by the message passing design is
worth it.

Stephen Fuld

unread,
Jul 1, 2003, 3:01:34 PM7/1/03
to

"Sander Vesik" <san...@haldjas.folklore.ee> wrote in message
news:10570630...@haldjas.folklore.ee...

> In comp.arch Stephen Fuld <s.f...@pleaseremove.att.net> wrote:

snip

> > So, the obvious question is then, is there something that makes
> > sense from that idea to adapt into current microprocessor designs
> > in order to give the advantages of low cost message passing, and
> > ease the development of more modular software that would use it?
> >
>
> Couldn't you add something like that onto a "conventional" processor?

I think you essentially restated part of my question. In the Elxsi,
Russell pointed out that it needed both hardware and microcode. Now
microcode is
passe on most current "conventional" processors, so you have to figure
something else out. In order to cross domains, you probably have to do some
fiddling with page tables or something. You want to avoid the overhead of a
full system call if possible. ISTM that there are some issues here to
resolve that may make it not worth while. Hence my question, and the second
part, which is, assuming that you had cheap message passing, what would it
take for much software to take advantage of it?

Peter da Silva

unread,
Jul 1, 2003, 2:57:40 PM7/1/03
to
In article <3f01c9ad$0$56600$bed6...@pubnews.gradwell.net>,
Geoff Lane <zza...@buffy.sighup.org.uk> wrote:

Not that any OS has ever moved a component into the kernel to do the same
thing. :)

Peter da Silva

unread,
Jul 1, 2003, 2:59:44 PM7/1/03
to
In article <RXjMa.11$5P2...@paloalto-snr1.gtei.net>,

Barry Margolin <barry.m...@level3.com> wrote:
> In article <3f01c9ad$0$56600$bed6...@pubnews.gradwell.net>,
> Geoff Lane <zza...@buffy.sighup.org.uk> wrote:
> >In alt.folklore.computers Peter da Silva <pe...@abbnm.com> wrote:
> >> If the message passing is cheap enough (significantly less than system
> >> call overhead in a "traditional" OS) then the message-passing system
> >> can be faster than the traditional one.

> >Message passing also has another advantage - it defines interfaces that
> >cannot be subverted. Monolithic kernels allow poor programmers to bypass
> >defined interfaces in the interests of "efficiency".

> On the other hand, it also traps you into using those interfaces. If you
> don't get the design right, it can be difficult to work around it. Ideally
> this shouldn't be a problem, but in a practical sense it often is.

No more than any other formalised interface does. If you need to redesign
to get rid of a poorly chosen interface, then it's probably best to be faced
with it up front than to have a new interface grow organically as components
start bypassing it.

Peter da Silva

unread,
Jul 1, 2003, 3:15:55 PM7/1/03
to
In article <thvv-7F26CE.1...@news.comcast.giganews.com>,

Tom Van Vleck <th...@multicians.org> wrote:
> One cost of message based systems is making copies of
> things.

You can use techniques similar to the ones used to cut down or even
eliminate copies in network stacks. All objects, all objects over a certain
size, or all objects designated as "fast copy" are mapped rather than
copied... and may even be allocated out of a shared memory area to cut
down on the amount of page table rearrangement needed. You just need
to agree that the sending component doesn't access the object after
it's sent.
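
The agreement can even be made structural. A tiny sketch (all names
invented): the send consumes the sender's reference, which is exactly
the convention that makes mapping instead of copying safe.

    struct port;                     /* invented queue type      */

    struct buf {
        long len;
        char data[4096];             /* lives in the shared area */
    };

    void enqueue(struct port *p, struct buf *b);   /* invented */

    /* Hand the buffer over; the sender's reference dies here. */
    void send_buf(struct port *p, struct buf **bp)
    {
        enqueue(p, *bp);
        *bp = NULL;                  /* sender must not touch it again */
    }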

> My experience with message passing systems is that they
> start out by penalized a factor of about two compared to
> direct call systems, and that by employing many clever
> strategies, can make up about half the deficit after years
> of improvement.

My experience is with one particular system where message passing was
only a few times slower than a subroutine call. Also, all messages were
queued, so rather than making a system call (which meant a context
switch), and then another, and another, a program sends multiple
messages and only then enters a wait, and that's where you hit the
context switch.

This is similar to what X11 does in bundling multiple operations in one
message, but it applies to all the concurrent operations performed by
one component... so after initialization (which tends to be serialised)
it may be making more "system calls" but only a fraction of them actually
involve a context switch.
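
In code, the pattern is simply several queued sends followed by one
wait (all names invented for the sketch):

    struct channel;
    struct msg;

    /* Invented primitives. */
    void        msg_send(struct channel *ch, struct msg *m);
    struct msg *make_op(const char *what);
    void        wait_replies(struct channel *ch, int n);

    /* Three "system calls", one context switch. */
    void redraw(struct channel *gfx)
    {
        msg_send(gfx, make_op("move"));
        msg_send(gfx, make_op("box"));
        msg_send(gfx, make_op("title"));
        wait_replies(gfx, 3);
    }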

It ran into serialization problems, mostly due to components that didn't
keep multiple messages in flight but instead ran each to completion before
attending the next.

Rupert Pigott

unread,
Jul 1, 2003, 5:31:25 PM7/1/03
to

"Peter da Silva" <pe...@abbnm.com> wrote in message
news:bdsln0$292k$6...@jeeves.eng.abbnm.com...

Or you could do what many kludgers do : Add another interface and
botch the internals to fit.

Cheers,
Rupert


Rupert Pigott

unread,
Jul 1, 2003, 5:32:36 PM7/1/03
to

"Peter da Silva" <pe...@abbnm.com> wrote in message
news:bdslj4$292k$5...@jeeves.eng.abbnm.com...

> In article <3f01c9ad$0$56600$bed6...@pubnews.gradwell.net>,
> Geoff Lane <zza...@buffy.sighup.org.uk> wrote:
> > In alt.folklore.computers Peter da Silva <pe...@abbnm.com> wrote:
> > > If the message passing is cheap enough (significantly less than system
> > > call overhead in a "traditional" OS) then the message-passing system
> > > can be faster than the traditional one.
>
> > Message passing also has another advantage - it defines interfaces
> > that cannot be subverted. Monolithic kernels allow poor programmers
> > to bypass defined interfaces in the interests of "efficiency".
>
> Not that any OS has ever moved a component into the kernel to do the same
> thing. :)

God forbid that you put NFS servers, HTTP servers, and GUIs into
kernel ! That would be lunacy ! Who would do such a thing ? :)

Cheers,
Rupert


Chris Hedley

unread,
Jul 1, 2003, 5:42:40 PM7/1/03
to
According to Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk>:

> > Not that any OS has ever moved a component into the kernel to do the same
> > thing. :)
>
> God forbid that you put NFS servers, HTTP servers, and GUIs into
> kernel ! That would be lunacy ! Who would do such a thing ? :)

Some people could jump to the conclusion that MVT's memory scheme
is still state of the art...

Chris.
--
"If the world was an orange it would be like much too small, y'know?" Neil, '84
Currently playing: random early '80s radio stuff
http://www.chrishedley.com - assorted stuff, inc my genealogy. Gan canny!

Sander Vesik

unread,
Jul 1, 2003, 6:10:37 PM7/1/03
to
In comp.arch Stephen Fuld <s.f...@pleaseremove.att.net> wrote:
>
> "Sander Vesik" <san...@haldjas.folklore.ee> wrote in message
> news:10570630...@haldjas.folklore.ee...
>> In comp.arch Stephen Fuld <s.f...@pleaseremove.att.net> wrote:
>
> snip
>
>> > So, the obvious question is then, is there something that makes
>> > sense from that idea to adapt into current microprocessor designs
>> > in order to give the advantages of low cost message passing, and
>> > ease the development of more modular software that would use it?
>> >
>>
>> Couldn't you add something like that onto a "conventional" processor?
>
> I think you essentially restated part of my question. In the Elixi, Russel
> pointed out that it needed both hardware and microcode. Now microcode is

yes - in a shorter (and I'm afraid, infinitely worse spelled) version.
By the time I reached the end I had forgotten all about the text before
the description.

> passe on most current "conventional" processors, so you have to figure
> something else out. In order to cross domains, you probably have to do some
> fiddling with page tables or something. You want to avoid the overhead of a

Instead of microcode, one might use a special operating mode /
exception level and support instructions. Such a mode could use
alternate regs, have access to data using more than one ASID and so on.
With some input checking in hardware it could be both fast and RISCy.

> full system call if possible. ISTM that there are some issues here to
> resolve that may make it not worth while. Hence my question, and the second
> part, which is, assuming that you had cheap message passing, what would it
> take for much software to take advantage of it?
>

Hmmm... depending on how ingrained their present message passing
interfaces and implementations are, Mach or some of the newer
microkernels might be portable to such? Couldn't you as a first step
eliminate some of their present inefficiency and then extend to achieve
more performance?

Stephen Fuld

unread,
Jul 2, 2003, 1:01:43 AM7/2/03
to

"Sander Vesik" <san...@haldjas.folklore.ee> wrote in message
news:10570974...@haldjas.folklore.ee...

> In comp.arch Stephen Fuld <s.f...@pleaseremove.att.net> wrote:

snip

> Instead of microcode, one might use a special operating mode /
> exception level and support instructions. Such a mode could use
> alternate regs, have access to data using more than one ASID and so
> on. With some input checking in hardware it could be both fast and
> RISCy.

Yes, I think you could use something like that. I guess I was looking
for a variety of potential solutions with some analysis of what fits
the best, is most efficient, is easiest to use, etc. You have indeed
provided the outline for one such method. Would the lower numbered
rings (but still >0) be sufficient, or do we need another mode?

> > full system call if possible. ISTM that there are some issues here
> > to resolve that may make it not worth while. Hence my question, and
> > the second part, which is, assuming that you had cheap message
> > passing, what would it take for much software to take advantage of
> > it?
> >
>
> Hmmm... depending on how ingrained their present message passing
> interfaces and implementations are, Mach or some of the newer
> microkernels might be portable to such? Couldn't you as a first step
> eliminate some of their present inefficiency and then extend to
> achieve more performance?

I think so. And you could have a compatibility "trap" routine that took
what are now kernel calls and turned them into the appropriate messages.
Eventually, code could migrate toward the native interfaces for
increased performance and perhaps functionality.
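
Such a trap routine might look like this, with everything invented for
the sketch: old binaries keep making kernel calls while new code speaks
messages directly.

    struct channel;
    struct msg;
    struct syscall_frame { int number; long arg[6]; };

    /* Invented helpers. */
    struct channel *server_for(int call_number);
    struct msg     *build_msg(struct syscall_frame *f);
    void            msg_send(struct channel *ch, struct msg *m);
    struct msg     *msg_wait(struct channel *ch);
    long            reply_value(struct msg *m);

    /* An old-style system call arrives here; repackage it as a
       message to the owning server and block, just as the original
       call would have. */
    long trap_handler(struct syscall_frame *f)
    {
        struct channel *ch = server_for(f->number);
        struct msg *m = build_msg(f);

        msg_send(ch, m);
        m = msg_wait(ch);
        return reply_value(m);
    }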

Pete Fenelon

unread,
Jul 2, 2003, 5:10:29 AM7/2/03
to
In alt.folklore.computers Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:
> God forbid that you put NFS servers, HTTP servers, and GUIs into
> kernel ! That would be lunacy ! Who would do such a thing ? :)
>

Thinking of no open-source OS in particular.... the script kiddies who
hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
optional ;)

I don't think I've seen an in-kernel GUI on any Unix system since
Whitechapel MG1s, but I'm sure someone could prove me wrong ;)

pete
--
pe...@fenelon.com "there's no room for enigmas in built-up areas" HMHB

Morten Reistad

unread,
Jul 2, 2003, 6:41:24 AM7/2/03
to
In article <vg58c5l...@corp.supernews.com>,

Pete Fenelon <pe...@fenelon.com> wrote:
>In alt.folklore.computers Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:
>> God forbid that you put NFS servers, HTTP servers, and GUIs into
>> kernel ! That would be lunacy ! Who would do such a thing ? :)
>>
>
>Thinking of no open-source OS in particular.... the script kiddies who
>hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
>optional ;)

The Linux people have the NFS server still in user mode, last I saw.
BSD has had the NFS server tightly connected to the rest of the fs
code, and even if it is a separate process, it is still executing
kernel code at a high privilege level.

<rant>
Why do the file systems have to be so tightly integrated in the "ring0"
core? This is one subsystem that screams for standard callouts and
"ring1" level.
</rant>


>I don't think I've seen an in-kernel GUI on any Unix system since
>Whitechapel MG1s, but I'm sure someone could prove me wrong ;)

GUI's, no; unless you count the fancy tty screen drivers.

Pete Fenelon

unread,
Jul 2, 2003, 8:07:08 AM7/2/03
to
In alt.folklore.computers Morten Reistad <m...@reistad.priv.no> wrote:
> In article <vg58c5l...@corp.supernews.com>,
> Pete Fenelon <pe...@fenelon.com> wrote:
>>In alt.folklore.computers Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:
>>> God forbid that you put NFS servers, HTTP servers, and GUIs into
>>> kernel ! That would be lunacy ! Who would do such a thing ? :)
>>>
>>
>>Thinking of no open-source OS in particular.... the script kiddies who
>>hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
>>optional ;)
>
> The Linux people have the nfs server still in user mode last I saw.
> The BSD has had the nfs server tightly connected to the rest of the fs
> code, and even if it is a separate process, it still is executing
> kernel code in a high privilige level.


AFAIR, acting as an NFS server under Linux doesn't need kernel
support (but can use optional kernel-side support). Acting as an
NFS client requires kernel support (to wire NFS into the supported
set of filesystems).

>
> <rant>
> Why do the file systems have to be so tightly integrated in the "ring0"
> core? This is one subsystem that screams for standard callouts and
> "ring1" level.
> </rant off>

Agreed.

>
>
>>I don't think I've seen an in-kernel GUI on any Unix system since
>>Whitechapel MG1s, but I'm sure someone could prove me wrong ;)
>
> GUI's, no; unless you count the fancy tty screen drivers.

--

Holger Veit

unread,
Jul 2, 2003, 8:28:28 AM7/2/03
to
Pete Fenelon <pe...@fenelon.com> wrote:
> In alt.folklore.computers Morten Reistad <m...@reistad.priv.no> wrote:
[...]

>
> AFAIR, Acting as an NFS server under Linux doesn't need kernel
> support (but can use optional kernel-side support). Acting as an
> NFS client requires kernel support (to wire NFS into the supported
> set of filesystems.)
>
>>
>> <rant>
>> Why do the file systems have to be so tightly integrated in the "ring0"
>> core? This is one subsystem that screams for standard callouts and
>> "ring1" level.
>> </rant off>
>
> Agreed.

Seconded. The problem is that the old VAX days are meanwhile gone -
it was the VAX that introduced the several privilege levels which the
386 copied rather precisely. With the exception of OS/2 and older
WinNT, it seems no modern OS has actually used the feature of multiple
privilege levels beyond the common distinction of "supervisor" (or
"kernel") and "user" modes. Earlier processors (like the 68000) and
later ones (PPC, MIPS, etc.) only have those two levels anyway. I.e.
the knowledge of layered privileges seems to be gone and lost - it is
now just an "everything" or "nothing" difference, which makes such
systems rather vulnerable. Ring 1 for file systems or the upper parts
of drivers is appropriate - but then, as M$ demonstrated by destroying
the rather clean concept of WinNT, there are performance issues due to
lousy application code that "force" the OS writers to circumvent such
clean ring callouts by throwing the whole garbage into ring 0. Or, as
in Linux, there is no explicit driver API at all (like NT's HAL or
OS/2's DevHlp); every driver can mess up everything else in kernel
mode without being prevented. Ideas like microkernels have been beaten
to death by Mach crap that didn't build the kernel up from the ground
up, but instead made the nonsense attempt to strip a monolithic kernel
and move parts into user mode without first defining where the ring
border is supposed to be. Needless to say, the result was catastrophic.

Holger

Barry Margolin

unread,
Jul 2, 2003, 11:02:13 AM7/2/03
to
In article <kscudb.krn1.ln@acer>, Morten Reistad <m...@reistad.priv.no> wrote:
>In article <vg58c5l...@corp.supernews.com>,
>Pete Fenelon <pe...@fenelon.com> wrote:
>>In alt.folklore.computers Rupert Pigott
><r...@dark-try-removing-this-boong.demon.co.uk> wrote:
>>> God forbid that you put NFS servers, HTTP servers, and GUIs into
>>> kernel ! That would be lunacy ! Who would do such a thing ? :)
>>>
>>
>>Thinking of no open-source OS in particular.... the script kiddies who
>>hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
>>optional ;)
>
>The Linux people have the nfs server still in user mode last I saw.

And I frequently hear complaints about how poor Linux's NFS support is.
Coincidence?

Unfortunately, the design of NFS practically screams for a kernel
implementation. Most file system APIs implement an
open/do-lots-of-operations/close model of file access. NFS has no
open or close operations; each request identifies the file using an
opaque "handle", and the file handle maps most naturally onto Unix's
inode model. When implementing NFS servers on other operating
systems, it's often necessary to design kludges to support its file
handles. Since the standard API only deals with accessing files by
name, not by inode, it's necessary to put the server in the kernel to
get past the name requirement (user-mode servers typically need the
same kinds of kludges as non-Unix implementations, or you need to add
system calls that allow by-inode access).
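
To see what those kludges look like: a minimal sketch in C of a
user-mode server's handle table, mapping the opaque handles back to
pathnames because the portable API can only reach files by name. All
names and sizes here are invented for illustration.

    /* Sketch of the handle kludge a user-space NFS server needs
       when the only portable file API is by-name. */

    #include <stdio.h>
    #include <string.h>

    #define MAX_FILES  1024
    #define FH_LEN     32    /* NFSv2 handles: 32 opaque bytes */

    struct handle_map {
        unsigned char fh[FH_LEN];  /* opaque handle the client sees */
        char path[4096];           /* name we can actually open     */
    };

    static struct handle_map table[MAX_FILES];  /* zero-initialized */
    static int n_entries;

    /* Invent a handle when the client first looks the file up. */
    const unsigned char *fh_for_path(const char *path)
    {
        if (n_entries >= MAX_FILES)
            return NULL;
        struct handle_map *e = &table[n_entries++];
        snprintf(e->path, sizeof e->path, "%s", path);
        /* A real server derives this from device/inode numbers so
           it survives restarts; a counter will do for the sketch. */
        snprintf((char *)e->fh, FH_LEN, "fh-%d", n_entries);
        return e->fh;
    }

    /* Every later READ/WRITE/GETATTR arrives with only the handle,
       so we search the table to recover an openable name. */
    const char *path_for_fh(const unsigned char *fh)
    {
        for (int i = 0; i < n_entries; i++)
            if (memcmp(table[i].fh, fh, FH_LEN) == 0)
                return table[i].path;
        return NULL;   /* stale handle */
    }

Note the failure mode: the table goes stale the moment a client
renames or removes a file it still holds a handle for - exactly what
an in-kernel server avoids by going straight from handle to inode.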

Shmuel (Seymour J.) Metz

unread,
Jul 2, 2003, 10:22:32 AM7/2/03
to
In <g8vsdb...@teabag.cbhnet>, on 07/01/2003
   at 10:42 PM, c...@ieya.co.REMOVE_THIS.uk (Chris Hedley) said:

>Some people could jump to the conclusion that MVT's memory scheme is
>still state of the art...

Even MVT had storage protection; Supervisor didn't automatically give
you key 0. So moving graphics into the kernel on an IA-32 is even
worse than what MVT had. A company that would do a thing like that
would be capable of anything, even allowing users to include code in
e-mail that the company's software would automatically execute on
receipt of the e-mail. We all know that no one could be that stupid.

--
Shmuel (Seymour J.) Metz, SysProg and JOAT

Any unsolicited bulk E-mail will be subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail.

Reply to domain Patriot dot net user shmuel+news to contact me. Do not reply
to spam...@library.lspace.org


Peter Ibbotson

unread,
Jul 2, 2003, 11:52:49 AM7/2/03
to
"Pete Fenelon" <pe...@fenelon.com> wrote in message
news:vg58c5l...@corp.supernews.com...

> In alt.folklore.computers Rupert Pigott <r...@dark-try-removing-this-boong.demon.co.uk> wrote:
> > God forbid that you put NFS servers, HTTP servers, and GUIs into
> > kernel ! That would be lunacy ! Who would do such a thing ? :)
> >
>
> Thinking of no open-source OS in particular.... the script kiddies who
> hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
> optional ;)
>
> I don't think I've seen an in-kernel GUI on any Unix system since
> Whitechapel MG1s, but I'm sure someone could prove me wrong ;)


I learned C on the MG1; I don't remember the GUI as being in the
kernel, but then at the time I'm not sure I'd have spotted the
distinction. I always liked the idea of a separate mouse co-processor,
and of all windows having their contents stored in a raster rather
than having to repaint when they overlapped. Are there technical
documents on the web anywhere?

--
Work pet...@lakeview.co.uk.plugh.org | remove magic word .org to reply
Home pe...@ibbotson.co.uk.plugh.org | I own the domain but theres no MX


Thomas

unread,
Jul 2, 2003, 3:45:30 PM7/2/03
to
Morten Reistad wrote:

> The Linux people have the nfs server still in user mode last I saw.

Debian offers a choice: user or kernel mode NFS server.

User mode has a real nfsd running in user space, handling requests.

Kernel mode also starts an nfsd, which in turn starts kernel threads
that are part of the kernel, just like the scheduler and interrupt
handlers are.


Thomas

Eric Lee Green

unread,
Jul 2, 2003, 4:40:50 PM7/2/03
to
Morten Reistad wrote:
> In article <vg58c5l...@corp.supernews.com>,
> Pete Fenelon <pe...@fenelon.com> wrote:
>>In alt.folklore.computers Rupert Pigott
>><r...@dark-try-removing-this-boong.demon.co.uk> wrote:
>>> God forbid that you put NFS servers, HTTP servers, and GUIs into
>>> kernel ! That would be lunacy ! Who would do such a thing ? :)
>>>
>>
>>Thinking of no open-source OS in particular.... the script kiddies who
>>hack the Linux kernel have managed 2 out of 3 ;) Fortunately they're
>>optional ;)
>
> The Linux people have the nfs server still in user mode last I saw.

Which apparently was over two years ago. The standard NFS server in Linux has
been the kernel one since the release of the Linux 2.4 operating system
kernel. The user mode NFS server is still available, but is unsupported and
only implements NFS V2, whereas the kernel mode NFS server implements NFS V3.
The NFS V4 reference implementation for Linux is also a kernel-mode server.

> <rant>
> Why do the file systems have to be so tightly integrated in the "ring0"
> core? This is one subsystem that screams for standard callouts and
> "ring1" level.
> </rant off>

Performance. Device drivers reading straight into filesystem buffers is
difficult to achieve in userland. You end up having to modify your VM to be
able to lock pages in memory, then have to go through the overhead of managing
said locking before you do I/O. Much easier to just have both operating in
kernel-land using the normal kernel memory page allocation mechanisms of the
OS in question.

That said, on my 2.4GHz laptop I could do reads through LUFS (Linux
Userland FileSystem) at 100 MByte/sec, much faster than the hard drive
can spin, so performance is not as big an issue as it was "back in the
day". I was using over 20% of the CPU to do the reads, though, versus
under 5% of the CPU for the native kernel-mode filesystem. The main
reason for the high CPU usage was all the data copying needed between
user mode and kernel-land. For example, read() turns into: kernel
call, VFS layer (which may have cached data), pop back up to userland,
kernel call to the device driver, copy the result back up to userland,
copy the result back down to kernel-land, pop back up to the user with
a copy of the data. Ouch. I can think of ways to speed this up, but
none that will make it as fast as the tightly-integrated kernel-land
implementation.
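
For the curious, here is roughly what that round trip looks like from
the daemon's side - a schematic loop, not LUFS's real interface; the
request device and the request struct are invented for illustration.

    /* Schematic userland-filesystem daemon, to show where the
       extra data copies come from. */

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    struct fs_request {      /* kernel -> daemon, metadata only */
        uint64_t offset;
        uint32_t size;
    };

    int main(void)
    {
        int dev = open("/dev/userfs", O_RDWR);      /* invented */
        int backing = open("backing.img", O_RDONLY);
        struct fs_request req;
        char buf[65536];

        for (;;) {
            /* the request itself is small; the pain is below */
            if (read(dev, &req, sizeof req) != sizeof req)
                break;
            if (req.size > sizeof buf)
                req.size = sizeof buf;

            /* data copy 1: kernel page cache -> our buffer */
            ssize_t n = pread(backing, buf, req.size,
                              (off_t)req.offset);

            /* data copy 2: our buffer -> kernel, which then makes
               data copy 3 into the original caller's buffer */
            write(dev, buf, n < 0 ? 0 : (size_t)n);
        }
        return 0;
    }

Three copies of the same bytes where the in-kernel path needs one;
that is where the extra 15% of CPU goes.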

> GUI's, no; unless you count the fancy tty screen drivers.

Hmm, the Linux 'fbcon' screen drivers ALMOST count as a native GUI, so I guess
the Linux geeks have achieved 2.5 out of 3 on the scale of atrocities :-).

--
Eric Lee Green mailto:er...@badtux.org
Unix/Linux/Storage Software Engineer needs job --
see http://badtux.org for resume



Chris Hedley

unread,
Jul 2, 2003, 4:38:01 PM7/2/03
to
According to Shmuel (Seymour J.) Metz <spam...@library.lspace.org.invalid>:

> Even MVT had storage protection; Supervisor didn't automatically give
> you key 0.

I had a feeling that my comparison was probably unfair to MVT!

> So moving graphics into the kernel on an IA-32 is even
> worse than what MVT had. A company that would do a thing like that
> would be capable of anything, even allowing users to include code in
> e-mail that the company's software would automatically execute on
> receipt of the e-mail. We all know that no one could be that stupid.

No, it'll never happen. :/ (I could rant on and on, but I think
it's already been done by people who're better at it than me!)

Linus Torvalds

unread,
Jul 2, 2003, 4:46:45 PM7/2/03
to
In article <kscudb.krn1.ln@acer>, Morten Reistad <m...@reistad.priv.no> wrote:
>
>The Linux people have the nfs server still in user mode last I saw.

Nope. That was five years ago. Nobody uses the user-space server for
serious NFS serving any more, even though it _is_ useful for
experimenting with user-space filesystems (ie "ftp filesystem" or
"source control filesystem").

><rant>
>Why do the file systems have to be so tightly integrated in the "ring0"
>core? This is one subsystem that screams for standard callouts and
>"ring1" level.
></rant off>

Because only naive people think you can do it efficiently any other way.

Face it, microkernels and message passing on that level died a long time
ago, and that's a GOOD THING.

Most of the serious processing happens outside the filesystem (ie the
VFS layer keeps track of name caches, stat caches, content caches etc),
and all of those data structures are totally filesystem-independent (in
a well-designed system) and are used heavily by things like memory
management. Think mmap - the content caches are exposed to user space
etc. But that's not the only thing - the name cache is used extensively
to allow people to see where their data comes from (think "pwd", but on
steroids), and none of this is anything that the low-level filesystem
should ever care about.

At the same time, all those (ring0 - core) filesystem data structures
HAVE TO BE MADE AVAILABLE to the low-level filesystem for any kind of
efficient processing. If you think we're going to copy file contents
around, you're just crazy. In other words, the filesystem has to be
able to directly access the name cache, and the content caches. Which in
turn means that it has to be ring0 (core) too.
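
To put a rough shape on that: a simplified, illustrative sketch of
such an interface in C - loosely in the spirit of a VFS, but these
are not the real Linux structures. The point is that every callout
fills in cache structures the VFS owns, through direct pointers,
which presumes both sides run in the same protection domain.

    #include <stdint.h>

    struct cached_page {        /* owned by the VFS page cache */
        uint64_t index;         /* page number within the file */
        void    *data;          /* the cached contents         */
    };

    struct inode;               /* forward declaration */

    struct dentry {             /* owned by the VFS name cache */
        const char   *name;
        struct inode *inode;
    };

    struct inode {
        uint64_t ino;
        uint64_t size;
        const struct fs_ops *ops;  /* how the FS plugs in */
    };

    /* Callouts the concrete filesystem provides; each one writes
       straight into VFS-owned memory, with no boundary crossing. */
    struct fs_ops {
        /* read disk blocks directly into the given cache page */
        int (*readpage)(struct inode *ino, struct cached_page *pg);
        /* resolve one name component, filling in the dentry */
        int (*lookup)(struct inode *dir, struct dentry *out);
    };

Push the filesystem out to another ring or to user space and each of
those direct pointers turns into a copy-in/copy-out pair.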

If you don't care about performance, you can add call-outs and copy-in
and copy-out etc crap. I'm telling you that you would be crazy to do it,
but judging from some of the people in academic OS research, you
wouldn't be alone in your own delusional world of crap.

Sorry to burst your bubble.

Linus

Tom Van Vleck

unread,
Jul 2, 2003, 5:38:30 PM7/2/03
to
Eric Lee Green wrote:

> Morten Reistad wrote:
> > <rant>
> > Why do the file systems have to be so tightly integrated in
> > the "ring0" core? This is one subsystem that screams for
> > standard callouts and"ring1" level.
> > </rant off>
>
> Performance. Device drivers reading straight into filesystem
> buffers is difficult to achieve in userland. You end up having
> to modify your VM to be able to lock pages in memory, then have
> to go through the overhead of managing said locking before you
> do I/O. Much easier to just have both operating in kernal-land
> using the normal kernel memory page allocation mechanisms of
> the OS in question.

Multics had a facility called the I/O interfacer, ioi_.
Its purpose was to allow the user to safely write a device
driver that ran in the user ring of a process, obtaining
page-locked I/O buffers and wiring and unwiring them efficiently.
The various tape DIMs and the printer DIM used ioi_.

One of the major efficiencies of this scheme was, again, that we
could avoid making multiple extra copies of the data. This saved
us complicated alloc, free, and synchronization operations and
the related memory pressure on every record.

It worked great. As I remember, when the printer DIM was changed
to use ioi_, its load on the system decreased by more than half,
and we got to remove a bunch of device specific code from ring 0.
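
For comparison, the nearest modern-Unix analogue of that pattern is a
user-space driver pinning its own buffer and doing unbuffered
transfers straight into it. A rough sketch follows - an analogy only,
not how ioi_ itself worked, and /dev/sdb is just a placeholder device.

    /* Pin a page-aligned buffer, then read into it with the page
       cache bypassed, so the data is copied exactly once. */

    #define _GNU_SOURCE        /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 1 << 20;          /* 1 MiB I/O buffer */
        void *buf;

        /* page-aligned, as O_DIRECT requires */
        if (posix_memalign(&buf, 4096, len) != 0)
            return 1;

        /* "wire" the buffer: no page faults mid-transfer */
        if (mlock(buf, len) != 0)
            return 1;

        int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
        if (fd < 0)
            return 1;

        ssize_t n = read(fd, buf, len);   /* device DMAs into buf */

        close(fd);
        munlock(buf, len);
        free(buf);
        return n < 0;
    }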

Charlie Gibbs

unread,
Jul 2, 2003, 5:19:31 PM7/2/03
to
In article <bduv4l$p08$1$8300...@news.demon.co.uk>
spa...@ibbotson.demon.co.uk (Peter Ibbotson) writes:

>I learned C on the MG1, I don't remember the GUI as being in kernel,
>but then at the time I'm not sure I'd have spotted the distinction.
>I always liked the idea of a seperate mouse co-processor, and all
>windows having their contents stored in a raster rather than having
>to repaint when they overlapped.

You mean like the Amiga's SMART_REFRESH windows? When I started
writing Windows programs, I was quite disgusted to discover that
the contents of my window would be erased if another window opened
over top of it, and that I was responsible for restoring it.
Oddly enough, my old Amiga 1000, with its paltry 68000, a couple
of support chips, and 512K of RAM, didn't seem to have much trouble
with the overhead that modern-day programmers with a Pentium 4 will
still claim is intolerably high...
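
The difference between the two models is small in code but large in
programmer effort. A sketch, with invented types, of what the window
system does when part of a window is uncovered:

    #include <stdint.h>

    struct window {
        int w, h;
        uint32_t *backing;                    /* saved pixels, or NULL */
        void (*on_paint)(struct window *win); /* app repaint callback  */
    };

    /* Stub so the sketch is self-contained; a real one would
       write the pixels to the framebuffer. */
    static void blit_to_screen(const uint32_t *src, int w, int h)
    {
        (void)src; (void)w; (void)h;
    }

    /* Called when part of the window becomes visible again. */
    void handle_expose(struct window *win)
    {
        if (win->backing) {
            /* Amiga SMART_REFRESH style: restore from the saved
               raster; the application never hears about it. */
            blit_to_screen(win->backing, win->w, win->h);
        } else {
            /* Windows WM_PAINT style: ask the app to redraw. */
            win->on_paint(win);
        }
    }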

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Jeff Kenton

unread,
Jul 2, 2003, 7:57:21 PM7/2/03
to

Charlie Gibbs wrote:
...

>
> You mean like the Amiga's SMART_REFRESH windows? When I started
> writing Windows programs, I was quite disgusted to discover that
> the contents of my window would be erased if another window opened
> over top of it, and that I was responsible for restoring it.
> Oddly enough, my old Amiga 1000, with its paltry 68000, a couple
> of support chips, and 512K of RAM, didn't seem to have much trouble
> with the overhead that modern-day programmers with a Pentium 4 will
> still claim is intolerably high...

Agreed, but you're cheating a little here. The support chips included
state-of-the-art graphics processing without loading down the 68000.

jeff (one-time authorized Amiga reseller)


--

-------------------------------------------------------------------------
= Jeff Kenton Consulting and software development =
= http://home.comcast.net/~jeffrey.kenton =
-------------------------------------------------------------------------
