[comp.os.research] Kernel Size

mason@polya.stanford.edu (Tony Mason) (03/04/88)

An interesting conjecture, that a small kernel equates with low efficiency,
and a large kernel equates with high efficiency.  From what I've seen of the
performance figures, this isn't truly supported.  For example, in V, the
kernel is "small" - 110-120K.  Yes, we have lots of user-level processes:
on my workstation I'm running (in addition to the kernel) an interfaceserver,
which controls keyboard/mouse input, a sun120display process, which controls
the screen, and an execserver, which controls all execution requests.  There
is also an exception server which handles faults in the processes running
on my workstation.

So, I too have 500K-800K of "system" type processes.  So what is the
difference?  Well, let's try this one on: I'm working on the executive
server, and I want to start up a new one.  In your huge-kernel system you
reboot.  On mine, I run a program that tells the executive server to
directly execute a new server.  Hence, in 5-10 seconds I have the change
installed, while you would still be trying to shut down.
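
To give a feel for how little machinery that takes, here is a schematic C
sketch.  The names (SendToServer, EXEC_SERVER_PID, SWAP_SERVER) and the
message layout are simplified stand-ins for this note, not our actual
interface; the point is only that installing a new server is an ordinary
request message sent to a user-level process.

    /* Schematic sketch: the names and message layout below are
     * stand-ins invented for this note, not the real V interface.
     */
    #include <string.h>

    #define SWAP_SERVER      1            /* request code               */
    #define EXEC_SERVER_PID  0x1000       /* well-known server id       */

    struct SwapRequest {
        int  opcode;                      /* what we want done          */
        char newImage[64];                /* pathname of the new server */
    };

    extern int SendToServer();            /* stands in for the kernel's
                                             synchronous send primitive */

    int main()
    {
        struct SwapRequest req;

        req.opcode = SWAP_SERVER;
        strcpy(req.newImage, "/bin/execserver.new");
        SendToServer(EXEC_SERVER_PID, (char *) &req, sizeof req);
        /* one IPC; no reboot, no kernel relink */
        return 0;
    }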

Or display servers.  Or internet servers.  Why should I be required to have
all the TCP/IP code *inside* my kernel?  If it is in a separate process and I
don't want to use a telnet or ftp connection, I save that much space.
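
The client side doesn't even have to care where the protocol code lives.
Again, a simplified sketch with made-up names (not our actual internet
server interface), using the same stand-in send primitive as above:

    /* Simplified sketch; OPEN_TCP, INET_SERVER_PID and the request
     * layout are stand-ins, not the real internet server interface.
     * The client asks a user-level internet server for a connection;
     * if nobody wants telnet or ftp, that server (and all of its
     * protocol code) simply isn't loaded.
     */
    #define OPEN_TCP         2            /* request code             */
    #define INET_SERVER_PID  0x2000       /* well-known server id     */

    struct InetRequest {
        int            opcode;
        unsigned long  remoteHost;        /* remote machine address   */
        unsigned short remotePort;        /* e.g. 23 for telnet       */
    };

    extern int SendToServer();            /* same stand-in primitive  */

    int OpenTcpConnection(host, port)
    unsigned long  host;
    unsigned short port;
    {
        struct InetRequest req;

        req.opcode     = OPEN_TCP;
        req.remoteHost = host;
        req.remotePort = port;
        return SendToServer(INET_SERVER_PID, (char *) &req, sizeof req);
    }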

Then, there is one other significant advantage.  As any project becomes
large, it becomes more complex and difficult to maintain.  With a small
kernel it is easy to change, recompile and rerun.  I know.  I've worked on
both the V kernel and the Unix kernel.  I can rebuild a Unix kernel about
once per hour.  I can rebuild the V kernel about twice per hour. 

In our group, we can offer you virtually everything you can get under more
conventional "large kernel" operating systems, including solid portability.
We use two SUN systems for file server access.  We can shut down the file
systems, check them, and all the while the other resources those machines
offer are still available to the entire world.

Recently, we have been updating our work to use a new version of our internal
transport-level protocol (VMTP; cf. RFC 1045).  I've been working on the
Unix side of it for a month now, and I'm just starting to get something
useful.  The V system was modified over a period of a week to implement
several levels of changes.

Certainly, the tendency is for an O.S. to start small and grow larger as the
number of services it must offer grows.  Just because some are further down
that path doesn't imply that it is the only, or best, path to follow.

Tony Mason
Distributed Systems Group
Stanford University
mason@pescadero.stanford.edu

P.S.  In the time it took me to write this note I completely rebuilt, from
scratch, the V kernel.  Try that with UNIX.  A complete rebuild on my
MicroVAX takes hours.

preston@felix.uucp (Preston Bannister) (03/06/88)

From article <4702@sdcsvax.UCSD.EDU>, by mason@polya.stanford.edu:

> An interesting conjecture, that a small kernel equates with low efficiency,
> and a large kernel equates with high efficiency.  From what I've seen of the
> performance figures, this isn't truly supported.  For example, in V, ...

I was hoping someone from the V group would respond :-)

As you'll notice, Martin (large kernels :-) and I both work at
FileNet.  We have somewhat different viewpoints...

FileNet uses a modified version of Unix v7 (they started 5 years ago)
with the primary differences being a distributed filesystem, diskless
workstation support and a (unique:-) form of shared libraries.

The product that FileNet sells is a "document image processing system".  
We use an optical disk jukebox to store _massive_ amounts of documents.
The system is completed with scanners, laser printers, workstations
with high resolution displays and a fair amount of software.

Martin works in the operating system group (he just sped up the
filesystem).  I work on a part of the "application" software
(user-interface and a bit below).

From the inside our system is made up of a number of services
(document, index, print, and others) and application software that
makes use of those services.  

Our entire system is in a continuous process of evolution.  Services are
revised, enhanced, or replaced fairly frequently.  While the OS is a
relatively small part of the overall system, in practice it is the part
that is the most difficult to change, probably because it is the largest
_single_ piece.

I have difficulty with the assertion that large kernels are more efficient.
In the real :-) world, smaller programs are easier to understand as a
whole, and therefore tend to evolve more rapidly.  Even if there is
some low-level advantage to throwing everything into the kernel, it
seems likely that algorithmic improvements in smaller, more rapidly
evolving software would soon more than make up for the difference.

--
Preston L. Bannister
USENET	   :	hplabs!felix!preston
BIX	   :	plb
CompuServe :	71350,3505
GEnie      :	p.bannister