
Are sockets the wave of the future?


Tom Reingold

Aug 23, 1990, 9:54:52 PM
At my job, we are about to write some applications for running over
TCP/IP on Unix hosts. We would like to think ahead with portability in
mind. One day, our code may run on a non-unix host. And if we write
it in sockets, we may run against a version that is built upon and
supports only STREAMS. Or will we?

1. What implementations are built on STREAMS?

2. Are they new or old?

3. Are all TCP/IP suites done with sockets nowadays?

4. Is there a reason to consider implementing our code in STREAMS?

5. Currently, our hosts run System V release 3.[23] and have both
libraries. Which is "the way" to go?

--
Tom Reingold
t...@samadams.princeton.edu
rutgers!princeton!samadams!tr
201-577-5814
"Brew strength depends upon the
amount of coffee used." -Black&Decker

Rod King (Sun HQ Consulting)

Aug 24, 1990, 4:35:20 PM
In article <21...@rossignol.Princeton.EDU> t...@samadams.princeton.edu (Tom Reingold) writes:
>At my job, we are about to write some applications for running over
>TCP/IP on Unix hosts. We would like to think ahead with portability in
>mind. One day, our code may run on a non-unix host. And if we write
>it in sockets, we may run against a version that is built upon and
>supports only STREAMS. Or will we?

With respect to Unix systems, System V r4 (SVr4) presents the
Transport Level Interface (TLI). This is the transport layer interface that
will obsolete sockets; there will be STREAMS modules existing underneath
the interface. TLI is modeled after the ISO Transport Service Definition
(ISO 8072), so presumably, if your non-unix host has a similar facility,
porting shouldn't be that bad.

>3. Are all TCP/IP suites done with sockets nowadays?

SVr4 (or at least Sun's version) will be using STREAMS.

I'm not sure of the commercial availability of TLI in a Unix system
(SunOS 4.1, the current version, does NOT support it).

By the way, if your application can use some sort of RPC facility, then
you can be shielded from all of this.

If you want some TLI documentation, refer to "Network Programming Guide",
from Sun Microsystems (part 800-3850-10).

Good luck!

Rod

James H. Coombs

Aug 24, 1990, 3:12:12 PM

>Date: Fri, 24 Aug 90 01:54:52 GMT
>From: Tom Reingold <cs!samadams.pr...@PRINCETON.EDU>
>At my job, we are about to write some applications for running over
>TCP/IP on Unix hosts. We would like to think ahead with portability in
>mind.

>3. Are all TCP/IP suites done with sockets nowadays?

Definitely not.

>5. Currently, our hosts run System V release 3.[23] and have both
>libraries. Which is "the way" to go?

Sun now states that no new applications should be developed at the socket
level. They recommend 1) RPC or 2) TIL(?--which starts with SunOS 4.1). RPC
has several advantages:

1. The library is available through ftp.
2. You can work at a high level with minimal concern for networking details.
3. Client and server can be bound together into a single process without
modifying the code (although you may want to eliminate the code that
would normally establish a connection).
4. Various transport mechanisms may be used under the RPC interface. The
package supports sockets and raw buffers. Future versions will probably
use TIL(?), which in turn provides some independence from lower levels.
5. The application-specific protocol can be developed relatively invisibly
by defining c-type structures. The programmer does not have to think
about sending a long, using htonl(), etc. If it is convenient to change
a long to a short, then rebuilding the application should update both the
client and the server (assuming the proper make dependencies). One is
less likely to try to send a long to a process that is waiting for a short.

On the down side:

1. It is a little trickier to do something like have the server fork
immediately after accepting a connection.
2. There may remain a need to drop down to the transport layer for such
details as keepalive options on sockets.

Whatever you decide, you should take a good look at RPC. I haven't been fully
converted yet, but I will certainly be influenced by the library even if I
decide to stay with sockets (and my object-oriented server building block).

--Jim

Dr. James H. Coombs
Chief Architect
Institute for Research in Information and Scholarship (IRIS)
Brown University, Box 1946
Providence, RI 02912
ja...@brownvm.bitnet
Acknowledge-To: <JAZBO@BROWNVM>

Ran Atkinson

Aug 24, 1990, 6:00:21 PM
The network interfaces specified by the System V Interface Definition
and by the X/Open consortium are based on the STREAMS with TLI (Transport
Level Interface) that were originally developed at AT&T.

Since the System V socket library is built on top of STREAMS/TLI,
an application written to use sockets will probably be slower on
a System V system than the same application written using STREAMS/TLI
natively. In general I think that the STREAMS/TLI approach is better,
because details of the transport protocol used are appropriately hidden,
unlike with BSD sockets.

Certainly a lot of good software is out there using sockets and I don't
think that the socket library will disappear anytime soon, but for new
software I really think that STREAMS/TLI are a better approach --
especially if developing for a System V platform.

Dan Bernstein

Aug 24, 1990, 11:46:45 PM
In article <900824210...@ucbvax.Berkeley.EDU> JA...@BROWNVM.BROWN.EDU ("James H. Coombs") writes:
> > From: Tom Reingold <cs!samadams.pr...@PRINCETON.EDU>
[ portability: sockets vs. streams ]

> > Which is "the way" to go?
> Sun now states that no new applications should be developed at the socket
> level. They recommend 1) RPC or 2) TIL(?--which starts with SunOS 4.1). RPC
> has several advantages:

For comparison, here's how my auth package measures up to your criteria.

> 1. The library is available through ftp.

The same is true of auth. /comp.sources.unix/volume22/auth/* on uunet.

> 2. You can work at a high level with minimal concern for networking details.

The same is true of auth. auth-util/* are sample applications, including
(for example) a shell script adaptation of trivial inews. 76 lines, with
comments and better error checking than the original.

> 3. Client and server can be bound together into a single process without
> modifying the code (although you may want to eliminate the code that
> would normally establish a connection).

auth is based on a client-server model, not a single-process RPC model,
so this does not apply. See below.

> 4. Various transport mechanisms may be used under the RPC interface.

The same is true of auth. The c.s.unix version of auth is based on
sockets; the same interface can be set up over practically any two-way
communications medium. You don't have to change code to use this. Note
that auth uses RFC 931 for authentication; it eliminates mail and news
forgery above TCP. auth-util includes a small set of wrappers that you
can put around sendmail to achieve this extra security with no effort.

> 5. The application-specific protocol can be developed relatively invisibly
> by defining c-type structures.

The auth programmer doesn't have to bother thinking about a protocol. He
need only use the software techniques he's comfortable with for writing
data to a file. The same techniques work for transferring data over the
network through auth.

> On the down side:
> 1. It is a little trickier to do something like have the server fork
> immediately after accepting a connection.

This is an advantage of auth, again reflecting a difference in
philosophy. auth was designed for a client-server model, so it doesn't
naturally adapt to single procedures run on different machines as part
of the same program. RPC was designed for remote procedure call, so it
doesn't naturally adapt to the (perhaps more common) client-server case.

> 2. There may remain a need to drop down to the transport layer for such
> details as keepalive options on sockets.

The same is true for auth. This is only a reflection of the ``problem''
that a high-level interface cannot anticipate all the possible
extensions of the low-level interface it's based on.

> Whatever you decide, you should take a good look at RPC. I haven't been fully
> converted yet, but I will certainly be influenced by the library even if I
> decide to stay with sockets (and my object-oriented server building block).

Whatever you decide, you should take a good look at auth. I have been
fully converted, as I wrote the package. If you want to develop
client-server applications and be sure that they'll work on the
networks of the UNIX of the future, auth is the way to go.

---Dan

Vernon Schryver

Aug 25, 1990, 6:00:34 PM
In article <1990Aug24.2...@murdoch.acc.Virginia.EDU>, rj...@paisley.cs.Virginia.EDU (Ran Atkinson) writes:
> ...[one of many paeans to TLI]...


AT&T and others have been selling TLI with socket libraries for almost 4 years.
For about that long, I've been asking about performance and compatibility.
I have been privately told many unflattering stories, but have still not
found any customers or vendors who will speak publicly or authoritatively.

How fast are TCP user-process-to-user-process byte transfers over TLI?
How compatible are the several socket libraries and kernel-emulators?

A good TCP-with-sockets benchmark is the BRL benchmark "ttcp". Since ttcp
compiles and runs directly over 4.3BSD compatible systems, it would be a
good measure of both speed and compatibility. FTP is not interesting in
this context, because it measures many things, not least file system
performance. Ttcp is available from several places via FTP. At least one
vendor, and rumor has it others soon, ship both source and object for ttcp
in standard products.


Vernon Schryver
v...@sgi.com

Rayan Zachariassen

Aug 25, 1990, 7:04:24 PM
Regardless of what the wave of the future is, presently if you write to
the TLI interface you won't be able to compile your code on a socket-only
system whereas if you use the socket interface you'll be portable to most
TLI systems (since they usually come with socket interface libraries).
If you aren't concerned about optimal efficiency, writing to the socket
interface now would be more portable.

obe...@rogue.llnl.gov

Aug 25, 1990, 9:34:37 PM
In article <900824210...@ucbvax.Berkeley.EDU>, JA...@BROWNVM.BROWN.EDU
("James H. Coombs") writes:
>
> Sun now states that no new applications should be developed at the socket
> level. They recommend 1) RPC or 2) TIL(?--which starts with SunOS 4.1). RPC
> has several advantages:
>

This is very disturbing. The Sun RPC is proprietary and, I believe, not
part of the DOD protocol suite. It is also NOT the RPC in the OSF DCE. As a
result I don't think software written for the SUN RPC is going to be very
portable when compared to socket or stream binding.

We've already seen a bit of this attitude in the Sun network management
software. It supports access only by RPC, not SNMP. This makes our Suns more
difficult to manage than almost any manageable box on the net!

R. Kevin Oberman
Lawrence Livermore National Laboratory
Internet: obe...@icdc.llnl.gov
(415) 422-6955

Disclaimer: Don't take this too seriously. I just like to improve my typing
and probably don't really know anything useful about anything.

Charles Hedrick

Aug 25, 1990, 10:59:17 PM
Sun RPC is proprietary? (1) RFC 1057 documents the spec. (2) I'm
reasonably sure that they posted an implementation of RPC to the
network some time ago, and later on a revised version. I'm still not
sure I'd write applications intended to be portable using it, but they
may be right. The advantage is that you could move to ISO or anything
else by changing the lower layers, and the application would not be
affected. I get the impression this is the reason they made that
recommendation.

Vernon Schryver

Aug 26, 1990, 2:17:44 AM

I bet the OSF/HP/Apollo/DCE and Netwise people, to name only 2, would not
be happy to be declared out of the race to define the standard RPC protocol.
Last I heard from them, they are each defining The Emerging Standard.


Vernon Schryver, v...@sgi.com

Warner Losh

Aug 26, 1990, 2:53:46 AM
In article <Aug.25.22.59....@athos.rutgers.edu>
hed...@athos.rutgers.edu (Charles Hedrick) writes:
>The advantage is that you could move to ISO or anything
>else by changing the lower layers, and the application would not be
>affected. I get the impression this is the reason they made that
>recommendation.

The disadvantage is that you can't write programs like FTP or sendmail
using the RPC protocol. Not programs that will interoperate with
other FTP's and sendmails, at any rate.

While RPC is good for some things, it is not the answer to all the
networking problems. Sometimes you just gotta write at a fairly low
level to interoperate with other programs.

Warner
--
Warner Losh i...@Solbourne.COM
Me? I'm the onion rings.

Dan Bernstein

Aug 26, 1990, 12:42:18 PM
In article <1990Aug26.0...@Solbourne.COM> i...@dancer.Solbourne.COM (Warner Losh) writes:
> In article <Aug.25.22.59....@athos.rutgers.edu>
> hed...@athos.rutgers.edu (Charles Hedrick) writes:
> >The advantage is that you could move to ISO or anything
> >else by changing the lower layers, and the application would not be
> >affected.
> The disadvantage is that you can't write programs like FTP or sendmail
> using the RPC protocol. Not programs that will interoperate with
> other FTP's and sendmails, at any rate.

auth provides that advantage without that disadvantage! Again, it was
designed for client-server applications, unlike RPC. From the README:

This package provides two benefits. The first is a secure user-level
implementation of RFC 931, the Authentication Server; unless TCP itself
is compromised, it is impossible to forge mail or news between computers
supporting RFC 931. The second is a single, modular interface to TCP.
Programs written to work with authtcp and attachport don't even need to
be recompiled to run under a more comprehensive network security system
like Kerberos, as long as the auth package is replaced.

The base package includes authtcp, a generic TCP client; attachport, a
generic TCP server; authd, a daemon supporting RFC 931; and authuser, a
compatibility library letting you take advantage of RFC 931 from older
applications.

authutil is a big pile of miscellany illustrating how to use auth.
Directories: aport - support programs for authtcp and attachport, making
server control easy; clients - various sample Internet clients,
including a short shell script implementation of trivial inews (with
RFC 931 security, of course); sendmail-auth - a small set of wrappers
you can put around sendmail to achieve full username tracking; servers -
various sample Internet servers, including a secure fingerd that
wouldn't have let RTM in; tam - Trivial Authenticated Mail, a complete
mail system in just 200 lines of code; and util - various short
utilities that everyone should have.

> While RPC is good for some things, it is not the answer to all the
> networking problems.

Agreed. It was designed for remote procedure call and does that quite
reasonably.

> Sometimes you just gotta write at a fairly low
> level to interoperate with other programs.

I don't think this is true: auth's interface is very high level.

---Dan

Bob Page

Aug 26, 1990, 7:10:24 PM
> Sun RPC is proprietary

What does proprietary mean to you? The RPC/XDR specs have been
published for some time as RFCs 1057 (RPC) and 1014 (XDR). RPC/XDR
source code (from Sun) is available for ftp from many places, like
titan.rice.edu. A freely redistributable NFS implementation was built
(not by Sun, but by members of the Internet community) on top of the
RPC/XDR source code. Some vendors have products based on the source.

> We've already seen a bit of this attitude in the Sun network management
> software. It supports access only by RPC, not SNMP.

Whoa -- Reality check. Last October at Interop '89, a Sun workstation
running SunNet Manager was in the ACE (now Interop Inc) booth
monitoring _all_ the network gateways (cisco, SynOptics, Proteon,
etc). A number of vendors (SynOptics, Cabletron, Network General,
Xyplex, and more) were running SunNet Manager in their respective
booths to show how they interoperated with the product. The package
was also running in the show's SNMP Interoperability booth. All this
communication was done via SNMP, not RPC.

> Disclaimer: Don't take this too seriously. I just like to improve my typing
> and probably don't really know anything useful about anything.

Sounds like good advice.

..bob
--
Bob Page Sun Microsystems, Inc. pa...@eng.sun.com

Geoff Arnold @ Sun BOS - R.H. coast near the top

Aug 26, 1990, 8:54:29 PM
Quoth brn...@kramden.acf.nyu.edu (Dan Bernstein) (in <8076:Aug2616:42:18...@kramden.acf.nyu.edu>):
#auth provides that advantage without that disadvantage! Again, it was
#designed for client-server applications, unlike RPC.

[smiley mode on]

Dear Dan,

You seem to understand these things, so maybe you can help me
with a little semantic problem. Every time I feed a .x file to rpcgen,
it insists on spitting out client and server stubs, which I find
convenient for building my distributed applications. Yet you say that
RPC wasn't designed for client-server applications. I'm confused...

Geoff Arnold
PC-NFS architect


-- Geoff Arnold, PC-NFS architect, Sun Microsystems. (ge...@East.Sun.COM) --

To receive a full copy of my .signature, please dial 1-900-GUE-ZORK.
Each call will cost you one zorkmid.

obe...@amazon.llnl.gov

Aug 27, 1990, 2:16:56 PM
In article <PAGE.90Au...@swap.Eng.Sun.COM>, pa...@Eng.Sun.COM (Bob Page) writes:

> What does proprietary mean to you? The RPC/XDR specs have been
> published for some time as RFCs 1057 (RPC) and 1014 (XDR). RPC/XDR
> source code (from Sun) is available for ftp from many places, like
> titan.rice.edu. A freely redistributable NFS implementation was built
> (not by Sun, but by members of the Internet community) on top of the
> RPC/XDR source code. Some vendors have products based on the source.

*Sigh* This one depends on your definition of "proprietary". I once was bashed
for saying that DECnet is not proprietary. Its specs have been published and
are freely available. There are several implementations. But DEC owns DECnet
and, until Sun places RPC in the public domain, Sun owns RPC. Its
implementations are many and on many systems, but it is still owned by Sun.
Frankly, this is a bogus issue and I should not have raised it.



>> We've already seen a bit of this attitude in the Sun network management
>> software. It supports access only by RPC, not SNMP.
>
> Whoa -- Reality check. Last October at Interop '89, a Sun workstation
> running SunNet Manager was in the ACE (now Interop Inc) booth
> monitoring _all_ the network gateways (cisco, SynOptics, Proteon,
> etc). A number of vendors (SynOptics, Cabletron, Network General,
> Xyplex, and more) were running SunNet Manager in their respective
> booths to show how they interoperated with the product. The package
> was also running in the show's SNMP Interoperability booth. All this
> communication was done via SNMP, not RPC.

Sorry, but you're wrong. The SunNet Manager receives SNMP from all of the
various sources. But I have another SNMP manager. (Several, in fact.) And guess
what? I can monitor my routers (Wellfleet, cisco, Proteon) and my VMS systems.
But not Suns. Why? Sun does not have an SNMP agent. When I complained to my Sun
salesbeing I was told that I didn't need one. SunNet Manager accesses the data
from Suns by RPC. The problem is that I don't want SunNet Manager. I personally
prefer others.

But the bottom line is that both streams and sockets are much more "portable"
than RPC (at least for now).



>> Disclaimer: Don't take this too seriously. I just like to improve my typing
>> and probably don't really know anything useful about anything.
>
> Sounds like good advice.

That's why it's there.

Kevin

Dan Bernstein

Aug 27, 1990, 3:29:02 PM
In article <24...@east.East.Sun.COM> ge...@east.sun.com (Geoff Arnold @ Sun BOS - R.H. coast near the top) writes:
> Quoth brn...@kramden.acf.nyu.edu (Dan Bernstein) (in <8076:Aug2616:42:18...@kramden.acf.nyu.edu>):
> #auth provides that advantage without that disadvantage! Again, it was
> #designed for client-server applications, unlike RPC.
> Dear Dan,
> You seem to understand these things, so maybe you can help me
> with a little semantic problem. Every time I feed a .x file to rpcgen,
> it insists on spitting out client and server stubs, which I find
> convenient for building my distributed applications. Yet you say that
> RPC wasn't designed for client-server applications. I'm confused...

Dear Geoff,

Let me illustrate with a trivial example: TAM, Trivial Authenticated
Mail, included in authutil. It is a complete mail system, including a
short shell script for sending mail, a shorter shell script daemon to
receive mail on port 209, programs to set up, print, and empty your
TAMbox, and scripts that convert TAM to regular mail and easy-to-read
formats. It uses an extensible protocol. It is much more secure than
sendmail: since it is implemented on top of auth, *all* forgeries above
TCP are stopped. (Most, if not all, forgeries at a typical university
are done without breaking TCP. auth completely eliminates that problem.)

TAM is short enough to be bugfree, doesn't run as root, and includes the
niceties you'd expect of a friendly mail system: sending you copies of
what you send out, including the TCP address of received mail in case of
DNS trouble, not loading the header with mounds of junk, and reading
mail with no delay.

All the code necessary to set up TAM, including security checks,
/etc/services and /etc/rc.local modifications, and comments, takes 241
lines. That's 5K. In contrast, the README, protocol description, and man
pages take 12K.

Finally, TAM will be trivial to port to any communications system
supporting the same interface for reliable, sequenced, two-party stream
communication.

Try to set up an RPC-based mail system with the features listed above.
You'll quickly appreciate the fact that RPC and client-server are quite
different concepts.

---Dan
``Networking systems so powerful that you can send mail around the
world, with a minimal risk of forgery, in just 5K of code including the
mail reader. Science fiction!'' ---hypothetical Internet guru, 1988

Werner Vogels

Aug 27, 1990, 3:46:34 PM
In article <Aug.25.22.59....@athos.rutgers.edu> hed...@athos.rutgers.edu (Charles Hedrick) writes:

Don't think you can move to OSI by just "changing a few layers". There
is a lot of layer-specific information crossing layer boundaries, so if
you view RPC as the session layer and XDR as the presentation layer there
is a lot to be changed.

You should read the last part of M.T. Rose's The Open Book on this subject.

Sun's RPC isn't the only RPC mechanism in the world. See the current ACM
SIGOPS issue for a comparison of about 10 of them. Sun has the advantage
that all the NFS implementers had to use SUN RPC to be interconnectable. And
when it's there, why not use it for other things as well? But this doesn't
mean it has been chosen by the network community as being the best possible
interface for writing client/server software. (For those who think SUN
invented remote procedure calls: it existed before SUN was born, developed,
like many amazing things, by XEROX PARC.)

I hate to say it, but if you want a really safe bet, use sockets. Every
system will have a socket library hanging around for the next ten years.


Werner H.P. Vogels

Software Expertise Centrum
Haagse Hogeschool, Intersector Informatica tel: +31 70 618419
Louis Couperusplein 2-19, 2514 HP Den Haag E-mail: wer...@nikhefk.nikhef.nl
The Netherlands or wer...@hhinsi.uucp

Charles Hedrick

Aug 27, 1990, 4:30:24 PM
Your complaint about SunNet has nothing to do with the portability or
lack thereof of RPC. RPC -- whatever Sun may say -- is not an
alternative to streams or sockets. Streams and sockets are ways of
accessing the IP or TCP level directly. RPC imposes on top of that a
data encoding standard and some other communications standards. It's
roughly comparable to ASN.1 plus a bit of mechanism to identify
applications.

I certainly agree with your complaint that Sun should have implemented
host monitoring using SNMP, but not because of any unportability in
RPC. RPC is at least as portable as ASN.1. Indeed at the time the
SNMP standard was issued, Sun had already posted RPC to the net, and
everybody had to write ASN.1 parsers in order to implement SNMP. So
SNMP would have been more portable if it had been written using RPC.
But it wasn't. So the problem with rolling your own network
monitoring protocol on top of RPC isn't that RPC is unportable, but
that you're rolling your own network monitoring protocol when there
already exists a standard one.

At any rate, it seems clear that the advice to use RPC is for people
who are writing their own applications. Nobody claims that RPC is
going to replace raw TCP for implementing FTP. Give me a break. The
claim is that if you want to write a high-level application, RPC
handles issues of data portability between different architectures
(byte order, floating point format, etc.), and will allow you to move
between TCP/IP and ISO when RPC is implemented over ISO. That still
seems reasonable advice. However RPC is not unique in this. There
are competing mechanisms at the same level. Since one of the primary goals
of the industry groups is to make sure that Sun doesn't ever repeat
their success with NFS, you can be sure hell will freeze over before
OSF or anyone else adopts RPC. But at the moment there is no single
alternative with overwhelming support. Since NFS is so widely
available, and having NFS means that you have to have RPC, that seems
to guarantee wide support for RPC. Thus until the industry converges
on a single alternative, RPC seems a reasonable choice.

Of course this doesn't address the original question, which is
whether to use sockets or streams to access TCP. I'm going to do
that in a separate response.

Charles Hedrick

Aug 27, 1990, 5:09:48 PM
Before we got diverted, the original question was whether code that
needs to access the network should use sockets or streams. Since
then, I've looked up the streams documentation in SunOS 4.1. I'm
going to assume that's typical of what streams is like, but of course
that could be wrong. So take this with a grain of salt.

In my opinion, if you want to support a full range of systems, you're
going to have to deal with both sockets and streams. So that's not
the basic design choice. It's also not a big issue anyway. All
network code that I've seen has subroutines for doing the low-level
network operations such as opening connections. These are not
complex subroutines. Maybe half a page each. They just do the right
combination of socket, bind, connect, etc. So the streams version is
going to have another version of the subroutine to open a connection,
that uses t_bind and t_connect instead of bind and connect. Big deal.
Similarly, data transfer subroutines can use send and recv for
Berkeley and t_snd and t_rcv for the streams version.

The real issue seems to be not this, but the problem that streams
doesn't fit the normal Unix view of I/O. At least in SunOS 4.1, you
can't do read and write on a stream. Thus the special t_snd and t_rcv
calls for I/O. Sockets allow you to use either special send and recv
calls, which allow more detailed control over network-level handling,
or normal read and write. But SunOS provides a streams module you can
push that gives you read and write. It can't deal with out of band
data, but if you know your application doesn't use OOB, it might be
usable.

It seems clear that you can get streams/sockets compatibility by doing
everything with subroutines and supplying streams and sockets versions
for everything. But there are two questions that can really only be
answered by people who have experience with using streams:

(1) What is the performance penalty for using the read/write interface
in streams? With sockets, the send/recv interface is at the same
level as the read/write interface, so there's no reason to expect any
performance penalty for using read and write. Indeed it's very rare
that you see programs using send and recv. This allows you to use
things like printf, and to set primary input or output to a socket.
With a stream, if you want to do this, you have to push on the
read/write interface. I could imagine ways of implementing it that
wouldn't result in any more overhead than doing the low-level I/O, but
there's no way to know whether this will happen in real
implementations other than trying it. If read/write turns out to be
unacceptable under streams, then you'll need to go to the approach of
using subroutines or macros for your low-level code, so that you can
supply both socket and streams versions. (By the way, the original
ATT claim was that sockets were a terrible wart on Unix, and streams
were "clean". I'm not sure what -- if anything -- that meant. It
seems to me that sockets makes network I/O look a lot more like normal
file I/O than streams do.)

(2) Is it a good idea to use the "sockets library"? Comments have
been made about both overhead and portability. Again, this is an
issue that only experience can settle. Most applications of sockets
that I've seen use read and write. In this case, all you need the
sockets library for is to open and close the connection. Once it's
open, you're going to use read and write directly, which will not need
to pass through any sockets emulation. So this seems to reduce to the
previous question, of whether the read/write interface to streams has
too much overhead. Whether it makes sense to use the sockets library
for opening and closing seems to reduce to the issue of how good the
sockets libraries are and how compatible the streams implementations
are. Clearly streams itself is just a framework. It's only the
actual device drivers and streams modules that determine whether two
implementations look at all alike. One could imagine a world in which
each streams implementation looks different, but all their socket
emulation libraries are fairly compatible. One could also imagine a
world in which everyone used the same streams code, but the socket
libraries are very flaky. In the first case, you'd be better off to
use sockets, in the second you'd be better off to use streams.

Since it's hard to get reliable information on any of these topics, I
think I'd make sure that my code is designed in such a way that you
can handle either case. That is, I'd run all network operations --
opening, closing, and actual I/O -- through low-level subroutines or
macros that are designed so you can implement them with either sockets
or streams.

Super user

Aug 27, 1990, 5:51:37 PM

Any obituaries for the sockets programming interface are a bit premature.

I admit that for many systems, Streams are preferable to sockets. However
most streams based systems also layer a sockets interface on top of
the "native" streams. "Why is this?", you may ask. How many public domain
programs do you see using the sockets interface? How many do you see
using the streams interface?
People who want to make their jobs easier will try to
build upon the existing code base. Much of that code base was written to
run on sockets.

In addition to the inertia of all that existing code, some
systems only support the sockets interface. For instance, many embedded
systems are fairly lean implementations that do not include a streams
implementation. I would bet that most routers, terminal servers,
etc are built on sockets interfaces. If you had to add a new capability
(like SNMP for instance) to such a device, you would use the existing
services.

There are a lot of weird devices now supporting TCP/IP. For instance,
I just worked on an implementation team that ported TCP/IP (and sockets)
to HP BASIC workstations, believe it or not. It was difficult
enough to make a usable BASIC->sockets interface; streams for BASIC would
be incredibly weird. In order to bring a large number of diverse systems
into the internet fold, there must be a least common denominator. Right
now, sockets appears to be the least common denominator. It is not
pretty, but it works.

Besides, the awkward style provides a certain amount of job security (8-)).


Bill VerSteeg
Network Research Corp
internet b...@nrc.com
UUCP gatech.edu!galbp!bagend!bvsatl!bvs

Michael O'Dell

Aug 27, 1990, 9:11:14 PM
There is a REALLY good reason why XDR and Courier look similar...

Duff's law:
Don't waste time having good ideas when you can steal better ones!

-Mike

Bill Melohn

Aug 28, 1990, 2:10:38 AM
>In article <900824210...@ucbvax.Berkeley.EDU>, JA...@BROWNVM.BROWN.EDU
>("James H. Coombs") writes:
>
> Sun now states that no new applications should be developed at the socket
> level. They recommend 1) RPC or 2) TIL(?--which starts with SunOS 4.1). RPC
> has several advantages:


If this is stated in any Sun documentation or sales literature, it is
in error. Sun Microsystems supports both the sockets and TLI network
programming interfaces, as well as Sun RPC. In our SunOS 4.1 release
TLI is implemented as a "compatibility module" on top of the native socket
implementation. In SVR4, sockets are implemented as a "compatibility
module" within the streams framework. All Internet services in both
implementations are written using the socket interface.

Ron Stanonik

Aug 28, 1990, 9:30:00 AM
> Dear Dan,
> You seem to understand these things, so maybe you can help me
> with a little semantic problem. Every time I feed a .x file to rpcgen,
> it insists on spitting out client and server stubs, which I find
> convenient for building my distributed applications. Yet you say that
> RPC wasn't designed for client-server applications. I'm confused...

RPC seems suitable for networking your application if your application
can be implemented using function call/return. It doesn't seem suitable
for networking your application if your application simply blasts a variable
(and perhaps voluble) amount of text to the user's screen (or into a file).
The non-network implementation of such usually consists of write/puts/printf
to stdout, but RPC doesn't seem to contain a stream type, such that you keep
reading from it until EOF.

Ron Stanonik
stan...@nprdc.navy.mil

Henry Spencer

Aug 28, 1990, 12:24:00 PM
In article <Aug.27.17.09....@athos.rutgers.edu> hed...@athos.rutgers.edu (Charles Hedrick) writes:
>... (By the way, the original
>ATT claim was that sockets were a terrible wart on Unix, and streams
>were "clean". I'm not sure what -- if anything -- that meant. It
>seems to me that sockets makes network I/O look a lot more like normal
>file I/O than streams do.)

It is important to distinguish "streams" (Dennis Ritchie's term for his
revised non-block-device i/o system) from "STREAMS" (what AT&T put into
System V). Dennis's streams cleaned up a lot of mess, and improved
performance to boot. But as Dennis is rumored to have said, "`streams'
means something different when shouted".

The way to do i/o on Dennis's streams was with "read" and "write".
Network i/o, in general, looked *exactly* like local device i/o. This
is the way it should be, unlike what both Berkeley and AT&T have done
(both have reluctantly conceded that most people want to use "read"
and "write" and have made that work, but their hearts were clearly
elsewhere).
--
TCP/IP: handling tomorrow's loads today |Henry Spencer at U of Toronto Zoology
OSI: handling yesterday's loads tomorrow| he...@zoo.toronto.edu utzoo!henry

Guy Harris

Aug 28, 1990, 4:05:05 PM
>The real issue seems to be not this, but the problem that streams
>doesn't fit the normal Unix view of I/O. At least in SunOS 4.1, you
>can't do read and write on a stream.

Well, yes, you can, actually. You have programs that do "read()" and
"write()" on terminals under SunOS 4.x, right? If so, they're doing
"read()" and "write()" on streams....

What you can't do in SunOS 4.1 - nor in the S5R3 systems from which much
of that code came - is "read()" and "write()" on *TLI* streams that
don't have the "tirdwr" module pushed atop them. That module, which you
mention, actually comes from S5R3.

In S5R[34], streams isn't really the equivalent of sockets, streams
*plus TLI* is.

(I wouldn't be at all surprised to find that in the Research UNIX
streams code, you *can* do "read()" and "write()" on streams used for
network connections. I don't know if this is the case, but I wouldn't
be surprised if it were....)

Barry Margolin

Aug 28, 1990, 5:22:41 PM
In article <7...@exodus.Eng.Sun.COM> mel...@mrbill.Eng.Sun.COM (Bill Melohn) writes:
>>In article <900824210...@ucbvax.Berkeley.EDU>, JA...@BROWNVM.BROWN.EDU
>>("James H. Coombs") writes:
>> Sun now states that no new applications should be developed at the socket
>> level.
>If this is stated in any Sun documentation or sales literature, it is
>in error.

It's stated in *boldface* at the beginnings of chapters 10 and 11 of the
SunOS 4.1 Network Programming Guide:

WARNING: Socket-based interprocess communication (IPC), while still
supported, is no longer the preferred framework for transport-level
programming....

If you are building a new network application that requires direct
access to transport facilities, use the TLI mechanisms.... New
programs should not be based on sockets.

Are you saying that Sun does not actually suggest that TLI be preferred
over sockets for new programs?

Of course, if a program is intended to be portable to other systems that
only have sockets then Sun's recommendation should be ignored. And Sun
will have to continue to support sockets for the foreseeable future, so
such programs will also be portable to SunOS.
--
Barry Margolin, Thinking Machines Corp.

bar...@think.com
{uunet,harvard}!think!barmar

Bill Melohn

Aug 29, 1990, 2:34:42 AM
In article <1990Aug28.2...@Think.COM> bar...@think.com (Barry Margolin) writes:
>Are you saying that Sun does not actually suggest that TLI be preferred
>over sockets for new programs?

Yes. Sun makes no official recommendation as to whether users should
use the socket or TLI method of accessing the network. Both are fully
supported in SunOS, and both have useful features, as does the RPC
method. In fact, we use all three methods in various pieces of the
utilities bundled with the operating system. A bug report has now been
filed on the warning message at the beginning of Chapters 10 and 11 in
the Network Programming Guide.

Ron Stanonik

Aug 29, 1990, 9:43:00 AM
Wollongong's win 3b implementation of tcp/ip (on our 3b2's running
sysVr3) seems to only push tirdwr for the accept socket call. We've
installed a couple of bsd programs (syslog and lpr), which use
read/write and seem to work just fine.

Anybody know what tirdwr does? Gather/scatter packets to satisfy
the read/write size argument? Some oob handling?

Ron Stanonik
stan...@nprdc.navy.mil

ps. Is there any way to get a list of all the modules pushed onto
a stream? I_LOOK only lists the topmost module.

James B. Van Bokkelen

Aug 29, 1990, 10:48:23 AM

The way to do i/o on Dennis's streams was with "read" and "write".
Network i/o, in general, looked *exactly* like local device i/o. This
is the way it should be, unlike what both Berkeley and AT&T have done
(both have reluctantly conceded that most people want to use "read"
and "write" and have made that work, but their hearts were clearly
elsewhere).

I would say rather that using read/write on network connections is
the way most people would *like* it to be. The reality is that on
most systems the local filesystem is a pretty tame beast compared to
a network connection. Unless the OS/language combination's
read/write was designed with network connections in mind (which means
boolean flag arguments and wide variations in behaviour depending on
them), use of read/write is likely to result in a cantankerous and
unreliable network application...

James B. VanBokkelen 26 Princess St., Wakefield, MA 01880
FTP Software Inc. voice: (617) 246-0900 fax: (617) 246-0901

Guy Harris

Aug 29, 1990, 5:04:25 PM
>RPC seems suitable for networking your application if your application
>can be implemented using function call/return. It doesn't seem suitable
>for networking your application if your application simply blasts a variable
>(and perhaps voluble) amount of text to the user's screen (or into a file).

Actually, there is an application I use that uses RPC to blast variable
amounts of text into a file; it's the UNIX copy program. Since my
machine is diskless, any such copies go over NFS, which runs atop
RPC....

This does, of course, involve more than just the RPC code; it also
involves code that sits atop RPC, including code that turns "read()"s
and "write()"s into the NFS requests sent over RPC. As such, you might
have to write similar stuff yourself if you were to use RPC for bulk
data transport in your application, and it wouldn't plug into "read" and
"write" in most UNIX systems without some work.

Guy Harris

Aug 29, 1990, 5:11:17 PM
>Are you saying that Sun does not actually suggest that TLI be preferred
>over sockets for new programs?

I suspect what he's saying is that different people within Sun say
different things, and that he doesn't agree with the authors of those
statements in chapters 10 and 11. Which of those people speak for Sun,
if any, is a different matter. (Sun is sufficiently large that it's not
clear the statement "Sun says XXX" is necessarily meaningful; even if
one particular Sun document says "do XXX", that may not mean it's
official corporate policy.)

>Of course, if a program is intended to be portable to other systems that
>only have sockets then Sun's recommendation should be ignored. And Sun
>will have to continue to support sockets for the foreseeable future, so
>such programs will also be portable to SunOS.

And S5R4 has a sockets interface, at least for TCP/UDP/IP (given that
Berkeley has, I think, modified the sockets interface for ISO - just as
AT&T had to modify the STREAMS/TLI interface for TCP, by putting in the
notion of an "urgent mark" - I don't know whether, even if you can use
the sockets library to talk to other protocols, you would do so in the
same way you did under 4.3-Reno or 4.4BSD).

And then you can look forward to POSIX's networking interfaces; I don't
know if either of them (DNI in particular) will look like sockets,
STREAMS+TLI, or none of the above....

for...@minster.york.ac.uk

Aug 30, 1990, 6:59:36 AM

True enough, but surely the key words are `good' and `better'!
You must seek the source of, and distil, the `better ideas'
from the model, not blindly copy all the bad ideas too.
(`Consult the genius of the place in all')
Failing that, you must at least make a good job of the theft!

Steven Grimm

Aug 30, 1990, 3:38:39 PM
he...@zoo.toronto.edu (Henry Spencer) writes:
>(I find
>it impossible to comprehend why 4BSD doesn't have an open-connection-to-
>service-X-on-machine-Y library function, given how stereotyped and how
>messy this job is)

I wrote just such a library a few years ago; lots of people have found it
very useful. Here it is again; everyone should feel free to pass it on
and use it in whatever they like. There is no separate man page, but there
are comments at the start of each function, which should be adequate.

---
" !" - Marcel Marceau
Steven Grimm Moderator, comp.{sources,binaries}.atari.st
kor...@ebay.sun.com ...!sun!ebay!koreth

---
/*
** SOCKET.C
**
** Written by Steven Grimm (kor...@ebay.sun.com) on 11-26-87
** Please distribute widely, but leave my name here.
**
** Various black-box routines for socket manipulation, so you don't have to
** remember all the structure elements.
*/

#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <ctype.h>

#ifndef FD_SET /* for 4.2BSD */
#define FD_SETSIZE (sizeof(fd_set) * 8)
#define FD_SET(n, p) (((fd_set *) (p))->fds_bits[0] |= (1 << ((n) % 32)))
#define FD_CLR(n, p) (((fd_set *) (p))->fds_bits[0] &= ~(1 << ((n) % 32)))
#define FD_ISSET(n, p) (((fd_set *) (p))->fds_bits[0] & (1 << ((n) % 32)))
#define FD_ZERO(p) bzero((char *)(p), sizeof(*(p)))
#endif

extern int errno;

/*
** serversock()
**
** Creates an internet socket, binds it to an address, and prepares it for
** subsequent accept() calls by calling listen().
**
** Input: port number desired, or 0 for a random one
** Output: file descriptor of socket, or a negative error
*/
int serversock(port)
int port;
{
int sock, x;
struct sockaddr_in server;

sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0)
return -errno;

bzero(&server, sizeof(server));
server.sin_family = AF_INET;
server.sin_addr.s_addr = INADDR_ANY;
server.sin_port = htons(port);

x = bind(sock, (struct sockaddr *) &server, sizeof(server));
if (x < 0)
{
close(sock);
return -errno;
}

listen(sock, 5);

return sock;
}

/*
** portnum()
**
** Returns the internet port number for a socket.
**
** Input: file descriptor of socket
** Output: inet port number
*/
int portnum(fd)
int fd;
{
int length, err;
struct sockaddr_in address;

length = sizeof(address);
err = getsockname(fd, (struct sockaddr *) &address, &length);
if (err < 0)
return -errno;

return ntohs(address.sin_port);
}

/*
** clientsock()
**
** Returns a connected client socket.
**
** Input: host name and port number to connect to
** Output: file descriptor of CONNECTED socket, or a negative error (-9999
** if the hostname was bad).
*/
int clientsock(host, port)
char *host;
int port;
{
int sock;
struct sockaddr_in server;
struct hostent *hp, *gethostbyname();

bzero(&server, sizeof(server));
server.sin_family = AF_INET;
server.sin_port = htons(port);

if (isdigit(host[0]))
server.sin_addr.s_addr = inet_addr(host);
else
{
hp = gethostbyname(host);
if (hp == NULL)
return -9999;
bcopy(hp->h_addr, &server.sin_addr, hp->h_length);
}

sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0)
return -errno;

if (connect(sock, (struct sockaddr *) &server, sizeof(server)) < 0)
{
close(sock);
return -errno;
}

return sock;
}

/*
** readable()
**
** Poll a socket for pending input. Returns immediately. This is a front-end
** to waitread() below.
**
** Input: file descriptor to poll
** Output: 1 if data is available for reading
*/
int readable(fd)
int fd;
{
return(waitread(fd, 0));
}

/*
** waitread()
**
** Wait for data on a file descriptor for a little while.
**
** Input: file descriptor to watch
** how long to wait, in seconds, before returning
** Output: 1 if data was available
** 0 if the timer expired or a signal occurred.
*/
int waitread(fd, time)
int fd, time;
{
fd_set readbits, other;
struct timeval timer;
int ret;

timerclear(&timer);
timer.tv_sec = time;
FD_ZERO(&readbits);
FD_ZERO(&other);
FD_SET(fd, &readbits);

ret = select(fd+1, &readbits, &other, &other, &timer);
if (FD_ISSET(fd, &readbits))
return 1;
return 0;
}

Glenn P. Davis

Aug 30, 1990, 7:57:56 PM

>RPC seems suitable for networking your application if your application
>can be implemented using function call/return. It doesn't seem suitable
>for networking your application if your application simply blasts a variable
>(and perhaps voluble) amount of text to the user's screen (or into a file).
>The non-network implementation of such usually consists of write/puts/printf
>to stdout, but RPC doesn't seem to contain a stream type, such that you keep
>reading from it until EOF.

>Ron Stanonik
>stan...@nprdc.navy.mil

Sun RPC includes a provision for 'batched' rpc's. The procedure call
doesn't wait for a return.
We have a set of applications based on Sun RPC which use this to "blast
variable and voluble amounts of data" at a server.

Glenn P. Davis
UCAR / Unidata
PO Box 3000 1685 38th St.
Boulder, CO 80307-3000 Boulder, CO 80301

(303) 497 8643

C. Philip Wood

Aug 31, 1990, 10:03:49 AM
>>Duff's law:
>> Don't waste time having good ideas when you can steal better ones!

> True enough, but surely the key words are `good' and `better'!


And who was it that said:

"'Better' is the enemy of 'good'."

Phil

Henry Spencer

Aug 30, 1990, 2:47:48 PM
In article <900829144...@ftp.com> jb...@ftp.com writes:
> The way to do i/o on Dennis's streams was with "read" and "write".
> Network i/o, in general, looked *exactly* like local device i/o. This
> is the way it should be...

>
>I would say rather that using read/write on network connections is
>the way most people would *like* it to be. The reality is that on
>most systems the local filesystem is a pretty tame beast compared to
>a network connection. Unless the OS/language combination's
>read/write was designed with network connections in mind (which means
>boolean flag arguments and wide variations in behaviour depending on
>them), use of read/write is likely to result in a cantankerous and
>unreliable network application...

Only if you have a cantankerous and unreliable network. :-) True, the
network interface is more complex than most device interfaces (although
whether it is more complex than the tty interface, in particular, is a
debatable point!)... but most applications don't care. They just want
to open a connection to service X on machine Y and reliably send bits
back and forth. The complexities have to be present for the occasional
sophisticated customer, but the simple customer shouldn't have to worry
about them. The Unix tty interface is quite complex, but most programs
can ignore most of it -- if they want to print an error message, they
just say "fprintf(stderr, ..." and it works. That's the way the network
interface should be too: some simple way to open a connection (I find it
impossible to comprehend why 4BSD doesn't have an
open-connection-to-service-X-on-machine-Y library function, given how
stereotyped and how messy this job is), read/write for i/o, close for
shutdown (and ioctl for option setting etc. for those who care). That's
all most customers want.

The networking facilities in Eighth/Ninth/Tenth Edition Unix within
Bell Labs are existence proofs that this approach can and does work.


--
TCP/IP: handling tomorrow's loads today| Henry Spencer at U of Toronto Zoology

OSI: handling yesterday's loads someday| he...@zoo.toronto.edu utzoo!henry

Walter Underwood

Aug 31, 1990, 1:15:04 PM
>The non-network implementation of such usually consists of write/puts/printf
>to stdout, but RPC doesn't seem to contain a stream type, such that you keep
>reading from it until EOF.

NCS 2.0 has exactly that feature -- indeterminate-length
streams of typed data. OSF cites that as one of the reasons
for adopting NCS 2.0 in the the OSF Distributed Computing
Environment. See the OSF DCE Rationale.

wunder

Keith Sklower

Aug 31, 1990, 3:48:37 PM
In article <1990Aug28.1...@zoo.toronto.edu> he...@zoo.toronto.edu (Henry Spencer) writes:
>The way to do i/o on Dennis's streams was with "read" and "write".
>Network i/o, in general, looked *exactly* like local device i/o. This
>is the way it should be, unlike what both Berkeley and AT&T have done
>(both have reluctantly conceded that most people want to use "read"
>and "write" and have made that work, but their hearts were clearly
>elsewhere).

I find this inaccurate, patronizing and tiresome. I have worked around
Berkeley since 1978 and although I was not a member of the actual unix
group in 1982 while TCP was being incorporated, I attended their
meetings and seminars.

I assure you that it was the design goal then, that only
``sophisticated processes'' would need a more elaborate mechanism to
establish a network connection, but that once one was established, it
should be usable as a completely ordinary file descriptor by ``naive''
processes like the date command, using read and write, and that the
file descriptor should be inherited by the normal unix means (fork & exec).

It sounds to me like Henry is attempting to rewrite history (for his
own possibly political motives).

James B. Van Bokkelen

Aug 31, 1990, 11:00:47 AM

.... The complexities have to be present for the occasional sophisticated

customer, but the simple customer shouldn't have to worry about them. The
Unix tty interface is quite complex, but most programs can ignore most
of it -- if they want to print an error message, they just say
"fprintf(stderr, ..." and it works. That's the way the network interface
should be too: some simple way to open a connection (I find it impossible
to comprehend why 4BSD doesn't have an open-connection-to-service-X-on-
machine-Y library function, given how stereotyped and how messy this job
is), read/write for i/o, close for shutdown (and ioctl for option setting
etc. for those who care). That's all most customers want.

I see where our viewpoints differ: I am selling applications to end-users,
and I intend to support them. Most of the end-users who use our Development
Kit for one-off or in-house applications are probably quite satisfied with
open/read/write/close. However, I am careful to advise any OEMs who develop
for resale to pay close attention to the flags and error codes...

Piercarlo Grandi

Sep 2, 1990, 2:45:24 PM
On 31 Aug 90 19:48:37 GMT, skl...@ernie.Berkeley.EDU (Keith Sklower) said:

sklower> In article <1990Aug28.1...@zoo.toronto.edu>
sklower> he...@zoo.toronto.edu (Henry Spencer) writes:

spencer> The way to do i/o on Dennis's streams was with "read" and "write".
spencer> Network i/o, in general, looked *exactly* like local device i/o.

Note that with the most recent developments of UNIX (the File System
Switch) read() and write() no longer mean much beyond 'move data from/to
the kernel'; they have become just a way to add new system calls, using
a file-like style of interface. The semantics of write() and read() have
become very, very loose. What does read/write mean on /dev/proc?
Something highly non-obvious.

spencer> This is the way it should be, unlike what both Berkeley and
spencer> AT&T have done (both have reluctantly conceded that most people
spencer> want to use "read" and "write" and have made that work, but
spencer> their hearts were clearly elsewhere).

And for good reason! read() and write() on file descriptors are not the
most amusing interface. It is much more convenient to have IPC-style
access to files, e.g. Accent with its ports, than vice versa, unless we
are so wedded to UNIX that we cannot change (probably true, vide the
failure of Accent to get accepted, and then the success of Mach, which
is a Unix-like imitation of it).

Frankly, the "everything is a file" way of achieving connectability is
not the best abstraction, because you want to connect to active
services, not just to passive storage. Straining a file model to include
"active" files is a bit inelegant.

"everything is a socket" or equivalently "everything is a process" (even
files!) is a much better fit. "everything is a file system", the
technology used in recent research Unixes or Plan 9, gets you a more
active abstraction than a file and works better than "everything is a
file", yes, but it requires a large suspension of disbelief, and assumes
that we really want the Unix directory tree model for all name service,
which may not be a terribly good choice either.

e.g. as in having an internet filesystem type, and doing
something like

fd = open("/dev/internet/lcs.mit.edu/echo.udp",...);
/*or*/ fd = open("/dev/internet/edu/mit/lcs/119.tcp",...);

sklower> I find this inaccurate, patronizing and tiresome. I have
sklower> worked around Berkeley since 1978 and although I was not a member
sklower> of the actual unix group in 1982 while TCP was being
sklower> incorporated, I attended their meetings and seminars.

Ahh. Incidentally, I would like to observe that I have *never* seen an
implementation of BSD sockets and TCP. The only ones that circulate are
extremely limited and rigid subsets. Since you have been around UCB for
so long, can you tell us why there is no (full) implementation of BSD
sockets? E.g., where are user domain type file descriptors?

About compatibility between IPC and normal file I/O:

sklower> I assure you that it was the design goal then, that only
sklower> ``sophisticated processes'' would need a more elaborate mechanism
sklower> to establish a network connection, but that once one was
sklower> established, it should be usable as a completely
sklower> ordinary file descriptor by ``naive'' processes like the date
sklower> command, using read and write, and that the file descriptor
sklower> should be inherited by the normal unix means (fork & exec).

Uhm. Precisely. And 'wrappers', one of the features of BSD sockets, were
designed to make such compatibility absolute. Why has nobody ever
implemented them?

sklower> It sounds to me like Henry is attempting to rewrite history
sklower> (for his own possibly political motives).

I actually think that he may be innocent, but he *is* attempting to
rewrite history, in a way different from what you say; on BSD it
actually happens that read/write on sockets is the same as on files,
but only because they changed the specification of read/write on files.

The only difference with send/recv is that sendmsg/recvmsg allow
out-of-band data, and passing file descriptors, which make no sense
with read/write to a file, but the distance from traditional read/write
is larger than that.
--
Piercarlo "Peter" Grandi | ARPA: pcg%uk.ac....@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@cs.aber.ac.uk

Lars Poulsen

Sep 4, 1990, 1:29:23 PM
In article <1990Aug28.1...@zoo.toronto.edu>
he...@zoo.toronto.edu (Henry Spencer) writes:
>spencer> The way to do i/o on Dennis's streams was with "read" and "write".
>spencer> Network i/o, in general, looked *exactly* like local device i/o.
>
>spencer> This is the way it should be, unlike what both Berkeley and
>spencer> AT&T have done (both have reluctantly conceded that most people
>spencer> want to use "read" and "write" and have made that work, but
>spencer> their hearts were clearly elsewhere).

In article <PCG.90Se...@athene.cs.aber.ac.uk>
p...@cs.aber.ac.uk (Piercarlo Grandi) writes:
>And for good reason! read() and write() on file descriptors is not the
>most amusing interface. It is much more convenient to have ... [features]

The problem is larger than that. As commercial programmers have
often said, unix read() and write() are not a very good interface for
files, either. They are a good interface for SIMPLE TEXT files, and very
little else.

Having a guaranteed subset of compatible functionality has allowed
very productive use of the "building block" philosophy. Having a way to
specify a remote connection "as if" it were a filename would allow more
things to be prototyped with simple shell scripts and pipelines.

On the other hand, great flexibility leads to a loss of device
independence, witness VMS' proliferation of pseudo-device drivers as an
example of where the path of customized interfaces may take us.
--
/ Lars Poulsen, SMTS Software Engineer
CMC Rockwell la...@CMC.COM

Henry Spencer

Sep 4, 1990, 4:20:56 PM
In article <38...@ucbvax.BERKELEY.EDU> skl...@ernie.Berkeley.EDU.UUCP (Keith Sklower) writes:
>>...the way it should be, unlike what both Berkeley and AT&T have done
>>(both have reluctantly conceded that most people want to use "read"
>>and "write" and have made that work, but their hearts were clearly
>>elsewhere).
>
>I find this inaccurate, partronizing and tiresome. I have worked around
>Berkeley since 1978 and although was not a member of the actual unix group
>in 1982 while TCP was being incorporated, attended their meetings and
>seminars.

I wasn't there; all I got to do was read the resulting documents. Some
of which come over with a very strong air of "well, if you want to do it
right, you will of course use our 57 new system calls, but we grudgingly
admit that read/write will work if you insist on being backward".


--
TCP/IP: handling tomorrow's loads today| Henry Spencer at U of Toronto Zoology

OSI: handling yesterday's loads someday| he...@zoo.toronto.edu utzoo!henry

Henry Spencer

Sep 4, 1990, 4:30:34 PM
In article <900831150...@ftp.com> jb...@ftp.com writes:
> ...That's all most customers want.

>
>I see where our viewpoints differ: I am selling applications to end-users,
>and I intend to support them. Most of the end-users who use our Development
>Kit for one-off or in-house applications are probably quite satisfied with
>open/read/write/close. However, I am careful to advise any OEMs who develop
>for resale to pay close attention to the flags and error codes...

I was writing from the down-in-the-wires viewpoint, where *any* user process
is a customer. And on the whole, I remain unrepentant. :-) It should be
possible to use open/read/write/close, with (say) perror when something
goes wrong, without major problems, assuming that error-message-and-die
is a reasonable thing to do on failure. A requirement for fault tolerance
does require closer attention to error handling, as in non-networked code.
Also as in non-networked code, a requirement for carefully optimized use
of system resources requires attention to flags and details. And anyone
building production code should be aware of the grubby details, so that
he can recognize the one case in a hundred where some of them are relevant.

However, I continue to believe that what most applications want to see is
a reliable bidirectional pipe, perhaps set up in slightly odd ways but
thereafter used via read/write/close, with invisible recovery from transient
problems and a suitable errno value on hard failure.

The resemblance to a Unix file is not accidental. :-)

Barry Shein

Sep 5, 1990, 8:57:35 PM

I think we're all looking at this in far too narrow a context. There
are many issues that are being hand-waved (note, anyone who assumes I
believe one way or the other on this issue please send me my decision
as I haven't received it yet.)

For example, when I address something that reaches out into, say,
network name space, where does my request go? Does it go to a
specialized driver that takes care of things like DNS resolution?
Does it get handed back up to a user-level process? What?

I still think the terminal example (open("/dev/tty/9600/even/noecho",...))
was a good one.

Someone sent me mail saying "oh, that's just ioctl()'s".

What magic! Sockets are bad, ioctls are good, such power in a name.

The real issue is:

A) Are we trying to simplify/generalize the *user*
interface so most any network/etc operation can be
specified in a string, wherever a file name is expected?

or

B) Are we trying to simplify/generalize the *programmer*
interface so they don't have to know about those nasty
socket calls?

Careful, I believe those two goals can be very much in conflict. I
also don't believe that everyone discussing this issue on this list
falls into the same camp. So we're having definite communications
problems, state your objectives!

-Barry Shein

Software Tool & Die | {xylogics,uunet}!world!bzs | b...@world.std.com
Purveyors to the Trade | Voice: 617-739-0202 | Login: 617-739-WRLD

Ken Sallenger

unread,
Sep 7, 1990, 5:06:24 PM9/7/90
to
In article <900831140...@snow-white.lanl.gov> cpw%snow-...@LANL.GOV (C. Philip Wood) writes:
=> > True enough, but surely the key words are `good' and `better'!
=>
=> And who was it that said:
=>
=> "'Better' is the enemy of 'good'."


The good is the enemy of the best.

Bill Wilson, circa 1938

Possibly quoting someone else...
Anyone know a definitive reference?
--
Ken Sallenger / k...@bigbird.csd.scarolina.edu / +1 803 777-9334
Computer Services Division / 1244 Blossom ST / Columbia, SC 29208

Piercarlo Grandi

unread,
Sep 12, 1990, 1:38:47 PM9/12/90
to
On 4 Sep 90 17:29:23 GMT, la...@spectrum.CMC.COM (Lars Poulsen) said:

lars> In article <1990Aug28.1...@zoo.toronto.edu>
lars> he...@zoo.toronto.edu (Henry Spencer) writes:

spencer> The way to do i/o on Dennis's streams was with "read" and
spencer> "write". Network i/o, in general, looked *exactly* like local
spencer> device i/o.

That works if you want to keep UNIX-like semantics. Unfortunately you
really want typed data, if only to allow passing file descriptors and
OOB signaling, and those do not fit well with traditional UNIX-style
interfaces.

spencer> This is the way it should be, unlike what both Berkeley and
spencer> AT&T have done (both have reluctantly conceded that most people
spencer> want to use "read" and "write" and have made that work, but
spencer> their hearts were clearly elsewhere).

lars> In article <PCG.90Se...@athene.cs.aber.ac.uk>
lars> p...@cs.aber.ac.uk (Piercarlo Grandi) writes:

pcg> And for good reason! read() and write() on file descriptors is not
pcg> the most amusing interface. It is much more convenient to have ...
pcg> [features]

Hey, I take exception to the [features] summary -- I was describing not a
set of extra features, but a completely different philosophy based on
communication with active entities represented by IPC ports. An
alternative to the UNIX style file descriptors to access passive files.
It can be as simple and terse as the current UNIX style.

lars> The problem is larger than that. As commercial programmers have
lars> often said, unix read() and write() is not a very good interface
lars> for files, either. It is a good interface for SIMPLE TEXT files,
lars> and very little else.

Uhm. Here we differ. The UNIX style of accessing files is excellent for
any type of file, because any type of file can be mapped onto untyped
byte arrays (a.k.a. virtual memory segments) by the use of suitable
library procedures. Unfortunately non-storage-like entities are not
easily mapped onto untyped byte arrays.

lars> Having a guaranteed subset of compatible functionality has allowed
lars> very productive use of the "building block" philosophy. Having a way to
lars> specify a remote connection "as if" it were a filename would allow more
lars> things to be prototyped with simple shell scripts and pipelines.

The problem is that modeling everything as a passive entity is far less
flexible than modeling everything as an active entity.
