[comp.protocols.tcp-ip] Graziano's Streams Query

dcrocker@TWG.COM (Dave Crocker) (09/22/88)

Periodically, a query hits this list about Unix Streams and about the
vendors offering TCP implementations for it.  The latest, from
Marco Graziano of Olivetti, asked a broad range of pointed questions and
so, it seems to me, forms a good base for a response that might be of
general interest.

Virtual Device Drivers

Streams uses modules and message-passing, in the kernel, to implement
protocol layers.  (Unless streams is implemented in user space, there is
no such thing as a "streams program" such as FTP or Telnet.  Streams refers
only to the kernel-level protocol core, usually limited to data link, 
network and transport levels, although some higher-level protocols are done
inside streams.)

Communication between modules is STRICTLY message-based.  The only other
access to information, by a module, is to call kernel subroutines.  Modules
may either be single queue in and out (called a "module") or multiple
queues in or out (called a "driver").
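
To make this concrete, here is roughly what a trivial pass-through module
looks like.  (The declarations are SVR3-flavored and all the names are mine;
a real module does its protocol work in the put, and usually service,
procedures.)

/*
 * A trivial pass-through Streams module.  Illustrative only.
 */
#include <sys/param.h>
#include <sys/stream.h>

static struct module_info minfo = {
	0, "pass", 0, INFPSZ, 2048, 128	/* id, name, packet sizes, hi/lo water */
};

/* put procedure: pass every message along, in either direction */
static int
passput(q, mp)
	queue_t *q;
	mblk_t *mp;
{
	putnext(q, mp);		/* hand it to the next module in the stream */
	return (0);
}

static int
passopen(q, dev, flag, sflag)	/* called at I_PUSH time */
	queue_t *q;
{
	return (0);
}

static int
passclose(q)			/* called at I_POP time */
	queue_t *q;
{
	return (0);
}

static struct qinit rinit =	/* read side */
	{ passput, NULL, passopen, passclose, NULL, &minfo, NULL };
static struct qinit winit =	/* write side */
	{ passput, NULL, NULL, NULL, NULL, &minfo, NULL };

struct streamtab passinfo = { &rinit, &winit, NULL, NULL };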

1.  Advantages:  Streams imposes a discipline which forces highly modular
protocol coding.  That can, of course, create terrible performance, but does
not need to.  The other side of this discipline is that it can allow the
construction of functions which can be mixed and matched.  It also means that
the task of creating protocols (i.e., in separate modules) can be partitioned
among different people easily.  

One of the emerging major benefits of Streams is its utility in a
multi-processor environment.  Properly implemented, streams modules may
operate in DIFFERENT address spaces.  The only shared memory that is needed
is for message-passing and data-buffers.

One of the subtle benefits is portability.  Highly integrated code has a
tendency to bury many of its operating-system-dependent assumptions.  Streams
forces those assumptions to be clearly defined.  To port streams code, you
need only create a Streams environment.  (This may turn out to be roughly the
same effort as porting/hacking another implementation, but the process is far
more predictable, and you are left with all of the protocol code shared
between the original and new implementations.)

2.  Are modules truly independent?  Yes!  Except, of course, that they must
agree on the format and rules of the messages that are passed.  That is, there
must be a well-defined semantic interface between modules.

3.  Out-of-band access to modules:  Modules are accessible as "devices".
For example, we supply /dev/arp.  You can open it and then do IOCTLs with it.
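
For instance, a user program might interrogate ARP about like this.  The
I_STR framing is standard Streams; the command code, though, is invented,
so check the vendor headers for the real ones.

#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>

#define ARP_GET_ENTRY	1	/* hypothetical command code */

main()
{
	struct strioctl sioc;
	char buf[128];
	int fd;

	if ((fd = open("/dev/arp", O_RDWR)) < 0) {
		perror("/dev/arp");
		exit(1);
	}
	sioc.ic_cmd = ARP_GET_ENTRY;	/* what we are asking the module */
	sioc.ic_timout = 0;		/* default Streams ioctl timeout */
	sioc.ic_len = sizeof(buf);	/* data into (and out of) the module */
	sioc.ic_dp = buf;
	if (ioctl(fd, I_STR, &sioc) < 0)
		perror("arp ioctl");
}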

4.  Support of alternate transports:  You betcha!  TCP and UDP are supported
as co-equals.  Further, we have an OSI TP4. 

5.  NFS integration:  NFS is an example of code above transport level which
resides in the kernel and is part of the Streams environment.  It accesses
UDP via the standard Transport Provider Interface (TPI).  
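
To give the flavor of TPI: a kernel client hands UDP a datagram as an
M_PROTO message block carrying the T_UNITDATA_REQ primitive, with the data
chained on behind.  A rough sketch (the function and its arguments are
invented for illustration):

#include <sys/param.h>
#include <sys/types.h>
#include <sys/stream.h>
#include <sys/tihdr.h>

static void
udp_sendto(udp_wq, data, addr, addrlen)
	queue_t *udp_wq;	/* write queue of the UDP stream */
	mblk_t *data;		/* the datagram itself */
	caddr_t addr;		/* destination transport address */
	int addrlen;
{
	struct T_unitdata_req *req;
	mblk_t *mp;

	mp = allocb(sizeof (struct T_unitdata_req) + addrlen, BPRI_MED);
	if (mp == NULL) {
		freemsg(data);	/* real code would bufcall() and retry */
		return;
	}
	mp->b_datap->db_type = M_PROTO;
	req = (struct T_unitdata_req *) mp->b_wptr;
	req->PRIM_type = T_UNITDATA_REQ;
	req->DEST_length = addrlen;
	req->DEST_offset = sizeof (struct T_unitdata_req);
	req->OPT_length = 0;
	req->OPT_offset = 0;
	bcopy(addr, (caddr_t) (mp->b_wptr + sizeof (struct T_unitdata_req)),
	    addrlen);
	mp->b_wptr += sizeof (struct T_unitdata_req) + addrlen;
	mp->b_cont = data;	/* the user data ride behind, uncopied */
	putnext(udp_wq, mp);
}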

Access from user programs is via the Transport Library Interface (TLI).  It
gets you across the user/kernel boundary and is competitive with Berkeley
sockets in terms of its role in life.  TLI is user level; TPI is kernel
level.  In effect, TPI does for kernel clients what TLI does for user
processes.

(By the way, there is at least one NFS that does not use TPI, but I do not
know any other details of its implementation.)
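
For concreteness, here is approximately what reaching TCP through TLI looks
like from a user program.  /dev/tcp is the conventional device name; the
error checking is abbreviated, and the remote address is assumed to have
been built elsewhere.

#include <fcntl.h>
#include <tiuser.h>

int
tcp_talk(addr, addrlen)
	char *addr;
	unsigned addrlen;
{
	struct t_call call;
	int fd;

	if ((fd = t_open("/dev/tcp", O_RDWR, (struct t_info *) 0)) < 0)
		return (-1);
	t_bind(fd, (struct t_bind *) 0, (struct t_bind *) 0);

	call.addr.len = addrlen;	/* remote transport address */
	call.addr.buf = addr;
	call.opt.len = 0;		/* no options, no connect data */
	call.udata.len = 0;
	t_connect(fd, &call, (struct t_call *) 0);

	t_snd(fd, "hello", 5, 0);	/* data flow, analogous to write() */
	t_close(fd);
	return (0);
}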


Are System V implementations able to use BSD kernel functions?  Well, mostly;
there is a pretty good mapping of SVR3/Streams kernel calls onto BSD kernel
functions.

1.  Device drivers are, in fact, kept alive by a daemon, typically.  In the
TCP case, the daemon sets up the multiplexed set of stream modules and holds
the file descriptor.
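
The plumbing job itself is small.  A sketch, with illustrative device names:

#include <fcntl.h>
#include <stropts.h>

main()
{
	int ip, en;

	ip = open("/dev/ip", O_RDWR);	/* the IP multiplexing driver */
	en = open("/dev/en0", O_RDWR);	/* an ethernet driver; name varies */
	if (ioctl(ip, I_LINK, en) < 0)	/* splice the device under IP */
		exit(1);
	close(en);		/* the link survives closing the lower fd */
	for (;;)
		pause();	/* closing ip would dismantle the link */
}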

2.  Global data structures:  Basically, these are a no-no.  The only
sharing of data that is allowed, other than what you access through standard
kernel functions, is via message-passing.  Keeping stray shared data structures
around leads to tough questions about how the structure will be shared in
a multi-processor environment.  The safest way is to have a query-response
discipline between the module that owns the structure and any that need to
"read" it.

We have recently been embarrassed to find a couple of places where we commit
this sin of sharing.  In both cases, the solutions are conceptually simple,
involve small programming effort, and will not have any performance impact.
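
A sketch of the query-response discipline; the query structure and the
choice of M_CTL as the message type are mine, for illustration:

#include <sys/param.h>
#include <sys/stream.h>

struct rt_query {		/* hypothetical query layout */
	long	rq_op;		/* what is being asked */
	long	rq_answer;	/* filled in by the owner */
};

/* module A: ask module B instead of touching B's table directly */
static void
send_query(q)
	queue_t *q;
{
	mblk_t *mp;

	if ((mp = allocb(sizeof (struct rt_query), BPRI_MED)) == NULL)
		return;
	mp->b_datap->db_type = M_CTL;	/* intermodule control traffic */
	((struct rt_query *) mp->b_wptr)->rq_op = 1;
	mp->b_wptr += sizeof (struct rt_query);
	putnext(q, mp);
}

/* module B: recognize the query in its put procedure and turn it around */
static void
answer_query(q, mp)
	queue_t *q;
	mblk_t *mp;
{
	/* consult the table B owns, write the result into the message */
	((struct rt_query *) mp->b_rptr)->rq_answer = 42;
	qreply(q, mp);		/* send it back the way it came */
}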

3.  Message buffer passing:  This is done via pointer (message block) passing.
Actual data are not passed.  For example, TCP has a header message block
that points to the user data.  IP adds its header message block and points to
the TCP mb.  ARP sets up the ethernet header mb and points to the IP mb.
(Note that this is a scatter/gather model, which can be quite nice for some
devices.)  When ARP or the device driver is done with the chain, it frees it.
However, TCP holds on to the data block until it gets an ACK back for the
relevant sequence number.
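
The IP step of that chain, sketched as code (the function name and header
size are illustrative):

#include <sys/param.h>
#include <sys/stream.h>

#define IPHDRSZ	20		/* minimal IP header, for illustration */

static void
ip_send(q, tcp_mp)
	queue_t *q;
	mblk_t *tcp_mp;		/* TCP header mb, pointing at user data */
{
	mblk_t *hdr;

	if ((hdr = allocb(IPHDRSZ, BPRI_MED)) == NULL) {
		freemsg(tcp_mp);	/* real code would bufcall() and retry */
		return;
	}
	/* build the IP header at hdr->b_rptr ... */
	hdr->b_wptr += IPHDRSZ;
	hdr->b_cont = tcp_mp;	/* chain the TCP block: a pointer, not a copy */
	putnext(q, hdr);	/* downstream sees hdr -> TCP hdr -> data */
}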

4.  Kernel modifications required:  Should not be any!  You can hide quite
a bit of convenient non-discipline by adding things to the kernel, but it
is not in the spirit of Streams.  Sockets, in particular, can be emulated on
top of TLI (i.e., within each user's application) with a fair degree of
faithfulness and no meaningful performance impact.  The one exception to this
rosy picture is select(), which currently does not map onto poll().  This
required writing a select() driver, which lives in Streams rather than in
the generic operating system kernel.
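
To give the flavor of such an emulation (wildly abbreviated; a real library
must also fake bind/accept semantics, socket options, and so on):

#include <fcntl.h>
#include <tiuser.h>

#define SOCK_STREAM	1	/* normally from the emulation's socket.h */
#define SOCK_DGRAM	2

int
socket(af, type, protocol)
	int af, type, protocol;
{
	/* hypothetical mapping: stream sockets onto /dev/tcp,
	 * datagram sockets onto /dev/udp */
	return (t_open(type == SOCK_STREAM ? "/dev/tcp" : "/dev/udp",
	    O_RDWR, (struct t_info *) 0));
}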

5.  Performance measurement:  Probably the best answer is to ask for the
next question.  There really are not any serious performance measurement or
instrumentation tools.  You can get information about buffer exhaustion and
can use the strace logging facility to record useful information, but that
seems to be about it.
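
For completeness, the logging hook looks like this from inside a module.
The module id and the message are invented; strace(1M) is then pointed at
that id from the shell.

#include <sys/param.h>
#include <sys/stream.h>
#include <sys/strlog.h>

static void
log_rexmit(seq)
	unsigned seq;
{
	/* mid 77, sid 0, level 1; to watch it: strace 77 0 1 */
	strlog(77, 0, 1, SL_TRACE, "tcp: retransmit, seq %x", seq);
}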


Dave Crocker
VP, Engineering
The Wollongong Group

thadani@xyzzy.UUCP (Usenet Administration) (09/28/88)

In article <8809260302.AA11541@ucbvax.Berkeley.EDU> dcrocker@TWG.COM (Dave Crocker) writes:
>One of the emerging major benefits of Streams is its utility in a
>multi-processor environment.  Properly implemented, streams modules may
>operate in DIFFERENT address spaces.  The only shared memory that is needed
>is for message-passing and data-buffers.

    This suggests that a Streams module may be written in
    a special way to run in a multi-processor environment, whereas
    perhaps it is the implementation of the Streams facility that would 
    most require careful design for multi-processor operation.  The
    modules themselves should be oblivious to the nature of the CPU(s),
    especially if the advantages of standardization are to be maintained
    (given that the existing Streams specification from AT&T does not
    allow for multi-processor operation).

    It would be interesting to know of approaches taken to implement
    Streams for multi-processor operation.

daveb@geaclib.UUCP (David Collier-Brown) (10/02/88)

In article <8809260302.AA11541@ucbvax.Berkeley.EDU> dcrocker@TWG.COM (Dave Crocker) writes:
| One of the emerging major benefits of Streams is its utility in a
| multi-processor environment.  Properly implemented, streams modules may
| operate in DIFFERENT address spaces.  The only shared memory that is needed
| is for message-passing and data-buffers.
 
From article <1247@xyzzy.UUCP>, by thadani@xyzzy.UUCP (Usenet Administration):
|      This suggests that a Streams module may be written in
|      a special way to run in a multi-processor environment, whereas
|      perhaps it is the implementation of the Streams facility that would 
|      most require careful design for multi-processor operation.  

  Well, you can have it either way.  If your supplier's streams
modules are for a standard uniprocessor configuration, you can write
a special one to transfer data to a separate device.  This might be a
good way of bootstrapping a FEP (front-end processor).
  Conversely, your supplier may use a bootstrap of this sort to
develop a cross-machine streams facility.  It is not immediately
obvious whether such a facility would be transparent or visible to
the streams user, although it would almost certainly be visible to
the system administrator...



-- 
 David Collier-Brown.  | yunexus!lethe!dave
 Interleaf Canada Inc. |
 1550 Enterprise Rd.   | HE's so smart he's dumb.
 Mississauga, Ontario  |       --Joyce C-B