[mod.protocols.tcp-ip] NFS, small machines, and ...

root@suny-sb.CSNET.UUCP (12/29/86)

The claim heard recently in this group that NFS is too large for
"small" machines does not wash - our implementation of NFS on the
Commodore Amiga PC fits nicely in about 33K of code (10K NFS, 7.6K
RPC/XDR, 15K TCP/UDP/IP, .6K user authentication) + network buffers
(~32K max).  AmigaNFS runs quite nicely on a 256K Amiga.

As to why an NFS server is not done on PCs, it seems to be more an
issue of host filesystem performance & required functionality than
anything else.  Using the Amiga as an example, the problems with
implementing an NFS server are:

	1.  AmigaDOS filesystem performance is only about 32K bytes/sec,
	    and cannot really adequately service more than one (any?)
	    user.

	2.  AmigaDOS does not support anything like file generation
	    numbers, needed for server crash recovery.  Remember that NFS
	    boils file/dir names down into a shorthand description
	    called a filehandle.  To keep the server completely stateless,
	    the transformation file/dir name -> filehandle must be
	    reversible; a generation number lets the server tell a
	    recycled file id apart from the original file it once named.

	3.  AmigaDOS uses object locking to refer to directories & files.
	    Since NFS is designed to be stateless (and idempotent), we
	    have no open/close calls to delimit the lifetime of a lock.
	    A partial fix can be had by saving complete path names and
	    running each service request atomically (lock/examine/unlock),
	    but this implies server state.
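To make point (2) concrete, here is a hypothetical sketch in C (not the
AmigaNFS code) of a filehandle that embeds a generation number.  The
field names and layout are my own invention; the point is only that the
handle can be decoded with no per-client state on the server, and that
without the generation field a recycled file id would silently name the
wrong file after a crash.

```c
#include <stdint.h>

/* Hypothetical NFS-style filehandle.  Opaque to the client; the
 * server can decode it after a reboot without remembering anything,
 * which is what keeps the server stateless. */
struct filehandle {
    uint32_t fsid;       /* which filesystem */
    uint32_t fileid;     /* inode-like number; may be reused after delete */
    uint32_t generation; /* bumped each time fileid is reused */
};

/* Stand-in for the host filesystem's record of a live file. */
struct inode {
    uint32_t fileid;
    uint32_t generation;
};

/* Encode: name lookup -> handle.  No server state is created. */
static struct filehandle fh_make(uint32_t fsid, const struct inode *ip)
{
    struct filehandle fh = { fsid, ip->fileid, ip->generation };
    return fh;
}

/* Decode: handle -> file.  A generation mismatch means the fileid
 * was deleted and reused, so the handle is stale and must be
 * rejected rather than mapped to the wrong file. */
static int fh_lookup(const struct filehandle *fh, const struct inode *ip)
{
    if (fh->fileid != ip->fileid)
        return -1;          /* no such file */
    if (fh->generation != ip->generation)
        return -1;          /* stale handle: fileid was recycled */
    return 0;
}
```

A filesystem like AmigaDOS, which has no generation field to put in the
handle, cannot detect the stale case above - hence the problem.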

I believe that comments (1) and (2) apply in principle to most small
systems available today.  

					Rick Spanbauer