[comp.unix.xenix] Streams development tools

blc@mentat.UUCP (Bruce Carneal) (09/23/87)

I've been watching the "streams" discussion for the last little while
with various people making statements about how easy/difficult it is
to do streams development.  The nicest way we've found to do streams
development is outside the kernel.  We implemented this in four parts:

	1)  A kernel resident streams module that registers processes to
	    receive messages indicating open() requests on previously
	    unavailable modules/muxes/devices.  This involves a module
	    that overwrites dummy entries in the fmodsw or cdevsw.

	2)  A splice module to reroute traffic to/from kernel based
	    place holder queues instantiated when one of these
	    registered modules/devices is opened. 

	3)  A streams environment that will run in the registering
	    processes.

	4)  A simple interface library to hide the details of registration
	    and message dispatch from the streams developer.  In our
	    implementation, when the routine xm_main() is called, all
	    non-conflicting devices/muxes listed in a process local
	    fmodsw/cdevsw table are registered with the kernel and a
	    data-driven grand loop is entered (sketched below).
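
For flavor, a registering process might look something like this.  Only
xm_main() is real; the table layout and field names below are invented
for the illustration (the actual library hides these details):

	#include <sys/types.h>
	#include <sys/stream.h>

	extern struct streamtab logmodinfo;	/* an ordinary streams module */

	/* invented for the sketch: a process local fmodsw-style table */
	struct xm_entry {
		char	*xm_name;		/* name clients open or push */
		struct	streamtab *xm_tab;	/* its read/write qinits */
	};

	struct xm_entry xm_fmodsw[] = {
		{ "logmod", &logmodinfo },
		{ 0, 0 }
	};

	main()
	{
		/*
		 * Registers every non-conflicting table entry with the
		 * kernel resident module, then runs the data-driven
		 * grand loop, dispatching each arriving message to the
		 * right put procedure.
		 */
		xm_main();
	}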

Using these facilities you can add or delete streams modules/muxes/drivers
at will from a running kernel with minimal risk.  If a streams process
crashes, the splice module instantiation representing a module inside the
kernel will close.  If there are outstanding clients of the module/mux/driver
they can be signalled with an error, after which the splice module
unregisters the module and you're ready to try again as soon as you
find the problem.
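
(Signalling the clients with an error is just the standard streams
mechanism, by the way.  Something along these lines, sent from the read
side of the dead module's place holder, makes every pending and future
system call on the stream fail; this is a sketch of the idiom, not our
actual code:)

	#include <sys/types.h>
	#include <sys/stream.h>
	#include <sys/errno.h>

	/*
	 * Send an M_ERROR message upstream; the stream head turns it
	 * into a failure (here EIO) on all subsequent read/write/ioctl
	 * calls on the stream.
	 */
	static void
	xm_hangup(q)
	queue_t *q;		/* the read queue to error out */
	{
		mblk_t *mp;

		if ((mp = allocb(1, BPRI_HI)) != NULL) {
			mp->b_datap->db_type = M_ERROR;
			*mp->b_wptr++ = EIO;
			putnext(q, mp);
		}
	}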

We call this collection of facilities XM (eXternalized Messages).
We use XM to develop all our non-hardware specific streams modules, from
simple monitors to full protocol stacks.  XM was moderately difficult
to get "right", but the payoff in development and debugging time is
substantial.  Streams programming is almost as easy now as simple application
programming.  To move modules in or out of the kernel without change, you *do*
have to follow the streams programmer guidelines given in the V.3 red books,
but that's really not too restrictive.  All of our non-hardware
modules/muxes/drivers run unchanged in either environment.
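
For reference, a module that meets those guidelines is just an ordinary
streams module.  A minimal pass-through skeleton like the following (made
up for illustration, not one of ours) runs under XM or in the kernel
without change:

	#include <sys/types.h>
	#include <sys/stream.h>

	/*
	 * Minimal pass-through module in the usual V.3 style: no state
	 * outside the queues, flow control honored via canput()/putbq().
	 */
	static struct module_info minfo = { 0, "xmecho", 0, INFPSZ, 512, 128 };

	static int
	xmechoput(q, mp)
	queue_t *q;
	mblk_t *mp;
	{
		putq(q, mp);	/* defer everything to the service routine */
		return 0;
	}

	static int
	xmechosrv(q)
	queue_t *q;
	{
		mblk_t *mp;

		while ((mp = getq(q)) != NULL) {
			if (canput(q->q_next)) {
				putnext(q, mp);	/* pass along unchanged */
			} else {
				putbq(q, mp);	/* blocked: requeue and wait */
				break;
			}
		}
		return 0;
	}

	static struct qinit rinit =
		{ xmechoput, xmechosrv, 0, 0, 0, &minfo, 0 };
	static struct qinit winit =
		{ xmechoput, xmechosrv, 0, 0, 0, &minfo, 0 };
	struct streamtab xmechoinfo = { &rinit, &winit, 0, 0 };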

On the negative side, a message takes a performance hit of about 5X
during its first transition from the kernel to a process and, optionally,
back again.  If multiple processes are involved, with no stream queue
neighbors being handled by the same process, a message will burn a lot
of time shuttling around between address spaces.

If you'd like the technical documentation on XM drop me a note.  If you don't
handle LaTeX I'll need your surface mail address.

PS:  For the masochists in the crowd, this stuff runs on 286 based
     X*nix as well as on V.3 machines.

-------------------------------------------------
	Bruce Carneal
mail:	Mentat Inc., 663 N. Las Posas #112, Camarillo CA 93010
phone:	805-987-3950
uucp:	...!uunet!mentat!blc

jhc@mtune.ATT.COM (Jonathan Clark) (09/23/87)

In article <123@mentat.UUCP> blc@mentat.UUCP () writes:
>I've been watching the "streams" discussion for the last little while
>with various people making statements about how easy/difficult it is
>to do streams development.  The nicest way we've found to do streams
>development is outside the kernel.

There's no reason that I can think of in 30 seconds apart from the need
to access I/O space that *forces* stream modules to run in the kernel
environment and address space. They would typically run *faster*
there, because the support environment was designed to run in the kernel.

I can certainly foresee in the future having a version of UNIX which
runs in four rings: user, shared libraries, streams, and kernel. Five
rings if you put device drivers between streams and kernel. Six if you
stick the syscall interface somewhere appropriate. Seven and eight, anyone?

Conversely, running streams in user space would be useful mostly in
the debugging phase of a project. Once the stream modules work (a
Simple Matter of Programming, as Guy Harris said recently), you could
put them into the kernel and forget about them. Saves all that
tedious rebooting and disk rebuilding when your driver runs amok with
the kernel buffer pointers...
-- 
Jonathan Clark
[NAC,attmail]!mtune!jhc

The Englishman never enjoys himself except for some noble purpose.

djg@nscpdc.UUCP (09/24/87)

In article <1313@mtune.ATT.COM>, jhc@mtune.ATT.COM (Jonathan Clark) writes:
> In article <123@mentat.UUCP> blc@mentat.UUCP () writes:
> > I've been watching the "streams" discussion for the last little while
> > with various people making statements about how easy/difficult it is
> > to do streams development.  The nicest way we've found to do streams
> > development is outside the kernel.
> 
> There's no reason that I can think of in 30 seconds apart from the need
> to access I/O space that *forces* stream modules to run in the kernel
> environment and address space. They would typically run *faster*
> there, because the support environment was designed to be the kernel.
> 
> I can certainly foresee in the future having a version of UNIX which
> runs in four rings: user, shared libraries, streams, and kernel. Five
> rings if you put device drivers between streams and kernel. Six if you
> stick the syscall interface somewhere appropriate. 7 and 8 anyone?
> 
> Conversely, running streams in user space would be useful mostly in
> the debugging phase of a project. Once the stream modules work (a
> Simple Matter of Programming, as Guy Harris said recently), you could
> put them into the kernel and forget about them. Saves all that
> tedious rebooting and disk rebuilding when your driver goes amok with
> the kernel buffer pointers...

On National Semiconductor's ICM series we have had several approaches to
stream module testing.  Back in the days of V.2.2 we tried the dynamic overlay
scheme (modifying fmodsw).  The overlays could be built using the
standard COFF tools (defining "real" sections as "dummy" sections and
leaving relocation bits on).  When we progressed to I/O processors we arranged
our inter-processor interface to be "stream" supporting.  Thus we can
"push" modules onto a remote processor.  The processor's code can be
dynamically downloaded, and board crashes automatically close streams.  (You
can also force this.)  We thus have very quick turn-around from compile
to test, and have all the advantages of off-loading the code onto dedicated
processors.  In Jonathan's scheme we have put streams and drivers on separate
processors.  (Now for kernel and shared libraries!)
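
Since the inter-processor interface is stream supporting, pushing a module
onto the remote processor looks just like pushing any module from user
code, i.e. the standard I_PUSH ioctl (the device and module names here
are made up):

	#include <stropts.h>
	#include <fcntl.h>

	/*
	 * Push a module onto a stream the usual way; with a stream
	 * supporting inter-processor interface the same call lands
	 * the module on the I/O processor.
	 */
	main()
	{
		int fd;

		fd = open("/dev/icm0", O_RDWR);
		if (fd < 0 || ioctl(fd, I_PUSH, "ipmod") < 0)
			perror("push failed");
	}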