[net.lan] socket library under System V?

dfh@SCIRTP.UUCP (David F. Hinnant) (08/06/85)

  We've been talking with various Ethernet vendors (Interlan, CMC,
Excelan, etc.) concerning implementing Ethernet on our 80286 Multibus
box under UNIX System V.  Software-wise they'll give us a "socket
library", TCP/IP, a driver, and the BSD applications.  All the pieces
are there; it's the "socket library" that has me a bit confused.  4.2
BSD implements sockets in the kernel. I presume the socket library gives
the same functionality, but I don't see how.  Can anyone that has this
kind of library tell me what it actually does?

  Is there a public-domain socket-type library?

  We've heard a lot about SLIP, and we're thinking about putting SLIP on
another machine of ours.  Does SLIP use 4.2 type sockets?

				As always, thanks in advance.

-- 
				David Hinnant
				SCI Systems, Inc.
				{decvax, akgua}!mcnc!rti-sel!scirtp!dfh

martillo@csd2.UUCP (Joachim Martillo) (08/08/85)

/* csd2:net.lan / dfh@SCIRTP.UUCP (David F. Hinnant) / 12:05 pm  Aug  6, 1985 */

>  We've been talking with various Ethernet vendors (Interlan, CMC,
>Excelan, etc.) concerning implementing Ethernet on our 80286 Multibus
>box under UNIX System V.  Software-wise they'll give us a "socket
>library", TCP/IP, a driver, and the BSD applications.  All the pieces
>are there; it's the "socket library" that has me a bit confused.  4.2
>BSD implements sockets in the kernel. I presume the socket library gives
>the same functionality, but I don't see how.  Can anyone that has this
>kind of library tell me what it actually does?

There is no need to implement sockets in the kernel, though this is
probably the better thing to do.  Basically all the driver does is
tell the system how to read and write the network interface and
possibly send control information via ioctl commands.  Presumably a
user could write to the /dev entry associated with the ethernet
interface directly via read and write system calls.  The library will
do this for you and tack on all the appropriate headers and what-not
that are currently done in the 4.2BSD kernel.
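
Schematically, the library end of a "send" is then little more than
the following (everything named here -- the header layout, the port
fields -- is invented purely for illustration, not any vendor's actual
format):

    #include <string.h>
    #include <unistd.h>

    /* Invented stand-in for whatever header the real protocol needs. */
    struct net_hdr {
        short dst_port;
        short src_port;
        short length;
    };

    /* fd was obtained earlier by open()ing the network /dev entry.
       The library builds the header the 4.2 kernel would have built,
       glues it onto the data, and write()s the lot to the driver. */
    int
    lib_send(int fd, short dst_port, short src_port, char *data, int len)
    {
        char pkt[1500];
        struct net_hdr h;

        if (len < 0 || len + (int)sizeof h > (int)sizeof pkt)
            return -1;
        h.dst_port = dst_port;
        h.src_port = src_port;
        h.length = (short)len;
        memcpy(pkt, &h, sizeof h);
        memcpy(pkt + sizeof h, data, len);
        return write(fd, pkt, sizeof h + len);
    }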

Now why is it better to do things as they are done in 4.2 BSD?  The
Berkeley approach makes it far easier to approach ipc in a unified
fashion even though you have many domains, many protocols, and many
communications devices attached to your machine.  In 4.2 you just
invoke socket, bind, and connect (or listen) for all of them.  With a
library approach you will have to rewrite your library and then
probably relink your executables, should you also want to run some
other protocol besides TCP/IP.  If you want to do routing you can do
it either in the application library or in the driver.  Neither idea
is good.
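
To make the uniformity concrete, here is roughly what the 4.2 sequence
looks like for a TCP listener (a minimal sketch with error handling
mostly omitted; swap the domain and address structure and the shape of
the code does not change):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    /* Minimal 4.2-style listener: the same few calls no matter which
       protocol family or device sits underneath. */
    int
    tcp_listener(int port)
    {
        struct sockaddr_in sin;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0)
            return -1;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = INADDR_ANY;
        if (bind(s, (struct sockaddr *)&sin, sizeof sin) < 0)
            return -1;
        listen(s, 5);
        return accept(s, (struct sockaddr *)0, 0);  /* fd for the new connection */
    }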

There are also lots of minor benefits from putting all the grunge
work in the kernel.  Port allocation is simplified.  Binary
executables tend to be smaller (which can be important since ipc
programs tend to proliferate when it all becomes available).

Now why would you want to use the library/driver approach?  Maybe you
are stuck with System V, or perhaps you are dealing with some hostile
operating system supplier which won't let you see kernel source but
will tell you how to write drivers.

robert@cheviot.uucp (Robert Stroud) (08/15/85)

David Hinnant (dfh@SCIRTP.UUCP) asked about library implementations
of the 4.2 socket interface.

Joachim Martillo (martillo@csd2.UUCP) replied and argued that the socket
interface gave a uniform approach to ipc whilst the library approach was
inflexible and inefficient because of all the protocol dependent code
which got linked into the user program. (See <3070002@csd2.UUCP> for the
original article).

I always thought that a library implementation of sockets simply mapped 
calls like socket, bind and send more or less directly into open,
ioctl and write. I don't see why you can't keep all the protocol dependent
code inside the kernel. Is it really that difficult to bend the socket
interface to fit the conventional device driver interface? If it is a
little awkward, then all the more reason to hide the grotty details in
a library, but why go to the trouble of introducing a new set of system
calls when the old ones are more or less adequate?? 

I'm not necessarily suggesting that the socket abstraction is a bad one, 
but does it have to be in the kernel? We all use the <stdio> library
and that's not part of the kernel...!

Please don't flame me about this - it's a serious question and I would
appreciate some discussion of the issues involved. It has been suggested that
the 8th Edition concept of a Stream can be used to implement sockets, 
presumably through the ordinary open/read/write/ioctl special device 
interface. Would anyone care to expand on this?

One of the systems I use (a Perq running PNX) provides both a datagram
and a transport service on an Ethernet in a conventional way without
sockets, so it can be done!

Robert Stroud,
Computing Laboratory,
University of Newcastle upon Tyne.

ARPA robert%cheviot.newcastle@ucl-cs.ARPA
UUCP ...!ukc!cheviot!robert
JANET robert@uk.ac.newcastle.cheviot (or robert@neda)

tcs@usna.UUCP (Terry Slattery <tcs@usna>) (08/18/85)

> I always thought that a library implementation of sockets simply mapped 
> calls like socket, bind and send more or less directly into open,
> ioctl and write. I don't see why you can't keep all the protocol dependent
> code inside the kernel. Is it really that difficult to bend the socket
> interface to fit the conventional device driver interface?

I'm using the Excelan front-end protocol suite on their Unibus card
and it does exactly that.  The drivers do have to do some additional
work since they have to handle unexpected process deaths, timeouts
for the select() implementation, and byte swapping between host and interface.
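
For anyone who hasn't met it, select() is the 4.2 call that waits on a
set of descriptors with a timeout; a typical use looks roughly like
this (a sketch, written with the fd_set macros as the interface later
settled down):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/select.h>

    /* Wait up to 5 seconds for input on descriptor fd.  Returns >0 if
       fd is readable, 0 on timeout, -1 on error.  This timeout
       behavior is what a front-end driver has to fake with its own
       timers. */
    int
    wait_readable(int fd)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        tv.tv_sec = 5;
        tv.tv_usec = 0;
        return select(fd + 1, &rfds, (fd_set *)0, (fd_set *)0, &tv);
    }

The byte-swapping part is the usual htons()/ntohs() business, applied
to whatever representation the board expects.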

The new protocol software is now in beta test and looks a lot better
than the previous in terms of macro support for common functions that
vary between unix implementations.  The rlogin and telnet daemons are
now handled on the board (look like a DH driver to the host) for
performance.  I have measured 60 Kbytes per second between my PDP-11
running the front-end software and a VAX-11/780 running link-level
mode (Excelan 204 cards in each host).  These are strictly memory-to-memory
transfers though.  Your mileage may vary with disks and CPU speed.

On the topic of a VMS implementation, they do offer VMS and RSX
support now, but only for telnet and ftp.  The Wollongong package
supposedly offers mail, but from Bary's note on using the Software
Tools package, you might be better off going with the Excelan and the
Software Tools mailer.
	-tcs
	Terry Slattery	  U.S. Naval Academy	301-267-4413
	ARPA: tcs@brl-bmd     UUCP: decvax!brl-bmd!usna!tcs

ps. I don't have any connections with Excelan other than using
their products.

smb@ulysses.UUCP (Steven Bellovin) (08/20/85)

> I always thought that a library implementation of sockets simply mapped 
> calls like socket, bind and send more or less directly into open,
> ioctl and write. I don't see why you can't keep all the protocol dependent
> code inside the kernel. Is it really that difficult to bend the socket
> interface to fit the conventional device driver interface? If it is a
> little awkward, then all the more reason to hide the grotty details in
> a library, but why go to the trouble of introducing a new set of system
> calls when the old ones are more or less adequate?? 
> 
> I'm not necessarily suggesting that the socket abstraction is a bad one, 
> but does it have to be in the kernel? We all use the <stdio> library
> and that's not part of the kernel...!
> 
> Please don't flame me about this - it's a serious question and I would
> appreciate some discussion of the issues involved. It has been suggested that
> the 8th Edition concept of a Stream can be used to implement sockets, 
> presumably through the ordinary open/read/write/ioctl special device 
> interface. Would anyone care to expand on this?

There are a few things to make clear first; I've heard an awful lot of
misconceptions about what sockets are, what streams are, etc.  A socket
is simply a new way to get a file descriptor from the kernel, as a handle
for some sort of I/O operation.  The conventional way -- opening a file --
has several disadvantages for network use, most notably that one name --
say, /dev/tcp -- must be multiplexed among many different users.  There are
assorted different ways of dealing with this, none of them particularly clean.
Among the techniques I've seen are having multiple file names, such as
/dev/tcp00 (used in UNET, 8th Edition UNIX); file names where the kernel
plays games to treat network files differently, and hence multiplexable
(3Bnet, BBN's 4.1bsd TCP/IP); having the kernel open a different file than
the one you asked for (System V Datakit); and having some file names point
to programs (v7 and 4.1bsd mpx files, some 8th Edition code).  All of these
mechanisms, with the possible exception of the 8th Edition funny file name
code, have serious disadvantages (efficiency, modularity, cleanliness, etc.).
Given that, adding a new system call is a clean and simple solution.

Some of the additional new system calls -- getsockname(), for example --
are a bit dubious; there seems to be little reason why they shouldn't
be ioctl calls.  Others -- bind, accept, connect, and especially listen --
have sufficiently new semantics that they are perhaps justifiable.  Things
like send, sendto, and sendmsg strike me as unnecessary goo -- granted, they
perform new functions (at least, in some of their aspects), but it isn't
clear to me that the functionality is that important most of the time; you
really end up with lots of feeping creaturism.
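
To put that concretely: on a connected socket, write() and send() with
no flags do the same thing, and the extra calls only earn their keep
in the less common cases (a sketch using the real 4.2 calls):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>

    void
    send_examples(int stream_fd, int dgram_fd, struct sockaddr_in *dest)
    {
        char msg[] = "hello";

        write(stream_fd, msg, strlen(msg));          /* plain old write() */
        send(stream_fd, msg, strlen(msg), 0);        /* identical effect, flags == 0 */
        send(stream_fd, msg, strlen(msg), MSG_OOB);  /* the genuinely new bit: out-of-band data */

        /* sendto() names the destination per datagram on an unconnected
           UDP socket -- something write() has no way to express. */
        sendto(dgram_fd, msg, strlen(msg), 0,
               (struct sockaddr *)dest, sizeof *dest);
    }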

Streams are a different ballgame entirely.  To the user, a stream is a
kernel-level filter that can be used with any stream file descriptor; this
includes most character devices, pipes, and network devices and pseudo-
devices.  *It makes just as much sense to push a stream module -- say,
a tty line discipline -- on top of a socket-derived file descriptor as on
top of a real tty port that you opened* -- the two concepts are orthogonal.
It wouldn't be at all difficult to add streams to 4.2bsd; it just isn't
clear that it's worth my time.
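
For what it's worth, on a system that has streams this "pushing"
really is just an ioctl on the stream's file descriptor, which is what
makes it indifferent to where the descriptor came from.  A sketch in
the later System V spelling (8th Edition's own calls differ in
detail, and the module name is whatever your system calls its line
discipline):

    #include <stropts.h>    /* System V streams interface */

    /* Push a tty line-discipline module onto an arbitrary stream file
       descriptor -- a tty port or a network stream, it makes no
       difference.  "ldterm" is the usual System V module name. */
    int
    push_ldisc(int fd)
    {
        return ioctl(fd, I_PUSH, "ldterm");
    }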

There is one other aspect of streams that is worth mentioning:  streams
are a kernel implementation technique that simplifies new sorts of
interconnections.  That is, they present a uniform technique for passing
input to a wide variety of processing modules; if no demultiplexing is
needed, output is handled that way as well.  In effect, streams move the concept
of the pipeline into the kernel.  If the path between two modules is
essentially dedicated (say, the one between tcp and ip), there is much
less benefit to using these techniques; in structure, efficiency, and
ease of use, they're roughly equivalent to mbuf chains.

martillo@csd2.UUCP (Joachim Martillo) (08/21/85)

/* csd2:net.lan / robert@cheviot.uucp (Robert Stroud) /  9:37 am  Aug 15, 1985 */
>David Hinnant (dfh@SCIRTP.UUCP) asked about library implementations
>of the 4.2 socket interface.

>Joachim Martillo (martillo@csd2.UUCP) replied and argued that the socket
>interface gave a uniform approach to ipc whilst the library approach was
>inflexible and inefficient because of all the protocol dependent code
>which got linked into the user program. (See <3070002@csd2.UUCP> for the
>original article).

This was not the only reason.  The inflexibility arises because the
library is being supplied for a specific protocol, in this case
TCP/IP.  In the Berkeley universe a socket is not simply a construct
for TCP/IP communication but a generalized communication mechanism.  I
might not want to run TCP/IP but rather ChaosNet or something else.

Further, I pointed out that routing is quite a problem for the
library/driver approach.  I also see a lot of problems with address
resolution.  I suspect the library/driver approach works best with a
small network where all hosts are almost always up, where routing is
static, and where address resolution is handled via static tables
maintained in files on all hosts.

Even if such a setup is sufficient for a site's current needs, I
suspect the users would eventually find it limiting.

>I always thought that a library implementation of sockets simply mapped 
>calls like socket, bind and send more or less directly into open,
>ioctl and write. 

This is my impression as well.


>		  I don't see why you can't keep all the protocol dependent
>code inside the kernel. 

This is beyond the library/driver approach and would not be possible
for someone running Xenix on an AT, because Microsoft does not provide
source.  But for the sake of argument assume the software suppliers
were nice, friendly people, and consider the pain of opening up an
Ethernet connection to a remote host using TCP/IP, assuming all the
virtual circuit protocol will be handled in the kernel.

First we open up /dev/ethernet for reading and writing and then
perform the necessary ioctl's to get a unique virtual circuit port
allocated to our process.

If we want to communicate on a well-known port on the foreign machine,
we use a library routine to get the foreign host addr from the foreign
host name.  Now what do we do with this addr + port?  In 4.2 we would
do a connect, but here we now have to resolve the foreign ethernet
address.  This is easy if we have static tables and our hardware never
breaks down.  Now we have to do some routing calculations if we have
any but the simplest network.  This could not be handled within the
current formalism because this is a network topological problem and
not a protocol problem.  Now we could put some address resolution
protocol routines in the kernel and run routing daemons, but then we
have begun to reinvent a large part of 4.2 ipc.
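
For reference, the library routine in question is gethostbyname(); in
4.2 everything after that -- Ethernet address resolution, routing --
happens inside the kernel once you connect(), roughly like this (a
sketch, error handling thin):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <string.h>

    int
    connect_by_name(char *hostname, int port)
    {
        struct hostent *hp = gethostbyname(hostname);  /* host table / name server lookup */
        struct sockaddr_in sin;
        int s;

        if (hp == 0)
            return -1;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);
        s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            return -1;
        /* The kernel resolves the Ethernet address and chooses the
           route here; the application never sees either problem. */
        if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0)
            return -1;
        return s;
    }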

I suppose some fancy ioctl's could be invented to take care of getting
the proper address and routing info to the network protocol routines
in the kernel, but this is not the normal use of ioctl, which is used
to pass control info to the driver for talking to the hardware
interface.  The address and routing data is not meant for the
hardware.  I suppose you could at this point invent a bunch of
protocol pseudodevices, but this strikes me as much more complicated
than the current Berkeley socket interface.

Well, now after doing all the fancy ioctl's on /dev/ethernet and on a
bunch of gross pseudodevices we are ready to write our first message.
Now suppose we are going to use this virtual circuit to set up a
telnet session.  Here comes another horde of pseudodevices!  This is
all just too complicated.  The formalism of read/write/ioctl, which
works well for tty's, lp's, and disk controllers, just is not flexible
enough and was never meant to handle devices like networks which are
"open" on the other side.

>			 Is it really that difficult to bend the socket
>interface to fit the conventional device driver interface? If it is a
>little awkward, then all the more reason to hide the grotty details in
>a library, but why go to the trouble of introducing a new set of system
>calls when the old ones are more or less adequate?? 

The old closed system calls are not adequate for open systems.

>I'm not necessarily suggesting that the socket abstraction is a bad one, 
>but does it have to be in the kernel? We all use the <stdio> library
>and that's not part of the kernel...!

>Please don't flame me about this - it's a serious question and I would
>appreciate some discussion of the issues involved. It has been suggested that
>the 8th Edition concept of a Stream can be used to implement sockets, 
>presumably through the ordinary open/read/write/ioctl special device 
>interface. Would anyone care to expand on this?

I am not so sure the edition 8 formalism is all that different from
Berkeley's formalism.  Looking at pg 1901 of the October 1984 ATTBLTJ,
I am not sure that there are not system calls for talking to proto/out
and proto/in modules which would perform the edition 8 stream version
of a connect.

If you look at pg 1906 of the same article, the diagram is
suspiciously like the Berkeley client/server model.  The user/process
is the remote user application talking to the remote pty.  The PT
looks like the server which sends messages to the local machine's
client.  I think Ritchie may have generalized this so that the server
can easily be either local or remote or divided.  Berkeley assumes the
server is remote.  With black magic, you can put the server on the
local machine talking to the device driver and have the remote process
be the client.  The X window system from MIT does this for
communication with a VS100.  The server might actually be built into
the formalism in some basic sort of way, although a built-in server
might not be a good idea if it is too inflexible or if it forces more
context switches between user and kernel process.

>One of the systems I use, (a Perq running PNX), provides both a datagram
>and transport service on an Ethernet in a conventional way without sockets
>so it can be done!

But I have the impression that edition 8 takes basically the
equivalents of socket, bind, connect and listen as fundamental and
then builds open and ioctl with some extra pseudodevices on top of this.

bruce@stride.UUCP (Bruce Robertson) (08/22/85)

Actually, it's not really that difficult to merge the entire Berkeley IPC
system into System V.  We at Stride Micro have done just that, and have
TCP/IP running on top of the Corvus Omninet.  The only thing that isn't
implemented is select(), which requires more thought.  select() isn't
really part of the IPC, but most of the BSD servers seem to use it.
-- 

	Bruce Robertson
	UUCP: cbosgd!utah-cs!utah-gr!stride!bruce

blc@nrcvax.UUCP (Bruce Carneal x313) (08/22/85)


NRC FUSION implements a superset of the BSD4.2 socket abstraction,
mapping bind(), connect(), accept(), and friends into ioctl() calls.
Socket() requires use of an open() call as well as an ioctl()
and returns a character device file descriptor clothed with the
indicated protocol.  The other library routines do little more than
marshal parameters and call ioctl().
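
A schematic of that mapping, for the curious (the device name and
command codes below are invented for illustration -- FUSION's actual
interface is of course its own):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    /* Invented command codes and device name, standing in for the real ones. */
    #define NETIOC_SOCKET   101     /* clothe the fd with a protocol */
    #define NETIOC_CONNECT  102     /* marshalled connect() arguments */

    struct sock_args { int domain, type, protocol; };
    struct conn_args { char addr[16]; int addrlen; };

    int
    lib_socket(int domain, int type, int protocol)
    {
        struct sock_args a;
        int fd = open("/dev/net", O_RDWR);      /* hypothetical character device */

        if (fd < 0)
            return -1;
        a.domain = domain;
        a.type = type;
        a.protocol = protocol;
        if (ioctl(fd, NETIOC_SOCKET, &a) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int
    lib_connect(int fd, char *addr, int addrlen)
    {
        struct conn_args c;

        if (addrlen > (int)sizeof c.addr)
            return -1;
        c.addrlen = addrlen;
        memcpy(c.addr, addr, addrlen);          /* marshal the parameters... */
        return ioctl(fd, NETIOC_CONNECT, &c);   /* ...and hand them to the driver */
    }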

This approach is adequate for all current FUSION UNIX ports
including v7, SysIII, SysV, BSD4.X, Xenix, Venix, Ultrix, and UTS.
("From now on, consider it standard". :-))

FUSION uses a similar approach under VMS and MS/DOS.  Only the OS
escapes/entrypoints change.

I suspect that anyone not having access to or inclination to change
kernel sources will have followed a similar ioctl() approach.

If you need or want further information get in touch.

Trademark credits: Ultrix, Xenix?, Venix, UTS?, FUSION and UNIX to
Digital Equipment, Microsoft, Venturecom, Amdahl, Network Research
and ATTIS? respectively.

UUCP:	{sdcsvax,hplabs}!sdcrdcf!psivax!nrcvax!blc
	ucbvax!calma!nrcvax!blc

I speak for myself alone.