fouts@infonode.ingr.com (Martin Fouts) (01/27/91)
Oh NO, not *this* argument again.

To oversimplify the debate so far:

The Pro Side: Lots of functionality in the kernel simplifies the
programming interface to the file system, with all of the related
advantages.

The Con Side: Simple file systems make for cleaner, more reliable
implementations, which are more efficient in the majority of cases.

OK, now for The Out Side: this argument confuses implementation with
semantics. Consider:

> Every time I say "it's easier and more efficient to put it in the
> kernel" you say "but it's just as easy to put it in a library". When
> I say "it's very inefficient to put it into a library as easy to use
> as the kernel" you say "who said I was going to do it the easy way."
> I'm not arguing that you *can't* put it in the user lib, only that
> it is an order of magnitude harder and slower to put it in the user
> lib. You have not given me any reason so far to doubt that. Try
> thinking this way: It's already in the kernel. Why would you take it
> out and put it into a user lib? Then maybe this kind of response
> would help progress the discussion. :-)
> -- Darren

1) The "kernel" doesn't have to be the kernel anymore. Microkernel (I
prefer "germ") implementations put a minimum amount of functionality
into the microkernel and distribute the rest among the "kernel" and
various servers. The only distinction that needs to exist between the
"kernel" and the servers is the privileges granted to the "kernel"
because it executes in supervisor mode.

2) Most of the space inefficiency of library implementations goes away
when you use shared libraries. It is in fact possible to share a
library among "kernel" implementations, servers, and application
programs.

3) Layering can both improve efficiency (in space and time) and ease
programming. (That is, after all, what "software engineering" is
supposed to accomplish.)
With respect to the file system discussion (tastes better versus less
filling): after having spent years doing systems programming for CP-V,
MVS, VMS, and Unix, as well as having implemented file systems in
Unix, let me propose the following observations:

When organizing an operating system, it is important to recognize
that it is more difficult to debug code on the supervisor side of the
interface than on the user side, so there is a strong motivation to
move as much functionality across the interface as possible.

Given that the operating system is intended to last a long time,
possibly outlasting the class of machines it was originally intended
for, most of the original design decisions will be changed many times
over during the life of the OS. This means that any viable OS needs
to be easy to modify. Forcing functionality to cross the interface
tends to lead to better modularity in the design, easing these
transitions.

A truly successful OS will run on a wide variety of machines and be
used by a large number of customers for a lot of different
applications. This implies that it should be easy to support multiple
configurations.

Given these considerations, the best known current technology is to
organize the system so that raw device support is in the
"microkernel", a very primitive file system is available in the
kernel, and separate file system implementations exist in servers for
the various kinds of file systems appropriate to the applications
currently being run. Finally, short circuiting and shared libraries
are used to overcome most of the problems which are cited as reasons
for in-kernel implementations.