Okuno@SUMEX-AIM.STANFORD.EDU (Hiroshi "Gitchang" Okuno) (12/17/86)
I am posting this request for my colleague. He would like to know about current activity on network file systems, which are an important class of applications alongside Telnet, FTP, SMTP and Finger. He knows that Sun has proposed its own NFS and that it has become a de facto standard. Any information on NFS (the common or the proper noun) is welcome. Thanks in advance. - Gitchang - -------
braden@ISI.EDU (Bob Braden) (12/18/86)
Gitchang,

The problem of Internet standard(s) for network file systems has been receiving some attention, but probably less than it deserves. Within the formal Internet R&D structure, the issue falls within the scope of the End-to-End Protocols task force, which has been considering what steps need to be taken.

Sun's NFS is a "de facto standard" (I dislike that term, which appears to be internally contradictory) for Unix systems. Internet protocols must be designed to handle the entire spectrum of operating systems in the world, not just Unix, and considerable work will be needed on NFS to generalize it outside the Unix world. It is unclear at this time whether that generalization will result in anything useful to either Unix or any other systems.

In principle, there is a collaboration between Sun and the End-to-End Protocols task force to pursue this question, but in practice little progress has been made. If there were a set of people who could say, "we have some knowledge and/or experience in the network file system area, and we want to devote some effort to the definition of an Internet standard network file system", things would happen a lot faster (with or without Sun's active participation).

Bob Braden
rick@SEISMO.CSS.GOV.UUCP (12/19/86)
Sun's NFS is NOT a good Unix networked filesystem. They broke some Unix semantics in the name of generality to the non-Unix world. Sun claims it to be a non-Unix-specific design.

What do you see as the major problems? "Considerable work" doesn't sound right. It seems to run with MS-DOS and VMS as far as I know, so it's not too Unix-specific.

---rick
braden@ISI.EDU (Bob Braden) (12/19/86)
> Sun's NFS is NOT a good Unix Networked Filesystem. They broke some
> Unix semantics in the name of generality to the non-unix world. Sun
> claims it to be a non-unix specific design.

Are we to read "not good" as "bad"? If not, what do you mean by this complaint? If so, why should we standardize a protocol which is bad for an important class of hosts?

> What do you see as the major problems? "Considerable work" doesn't
> sound right.

The problem most people have cited is NFS's authentication/permission model, which is not only Unix-oriented but also perhaps inadequate. This is a hard and important issue. In fact, it has been pointed out that the assumption of globally unique uids and gids is invalid at many sites, even among Unix systems.

Another problem is in the remote mount protocol. Sun treats it as separate, yet it seems that its functions ought to be included in any network file system standard.

Another set of issues has to do with convincing ourselves that the NFS primitives have sufficient generality to provide useful service with the other file systems in the world besides Unix. That probably means generalizing the existing primitives and adding a few more. It also means providing defined hooks for extensibility.

Finally, there is the issue of underlying layers. NFS assumes two other protocols, XDR and RPC. It seems desirable to define NFS independently of the lower layers, so that different choices could be made in the future for these protocols (after all, that is what layering is really for). RPC, in particular, is highly doubtful in its present form as an Internet standard, as its transport-protocol mechanism seems deficient.

> It seems to run with MS-DOS and VMS as far as I know. So, it's not too
> Unix-specific.

More information about the generality and completeness of these implementations would be interesting and useful. Could I do a remote mount from my Sun to our VMS machine, for example, and access any VMS file? Can the VMS machine get at any Unix file (subject to permissions)? How do permissions work?

Finally, I don't know how much time you have spent on protocol committees, but every one of the existing Internet protocols represents several man-years (or more) of concentrated effort, spread out over 2-5 years.

Bob Braden
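Braden's layering point is easier to follow with a concrete picture of what the XDR layer underneath NFS actually does. The sketch below (Python is used here purely for illustration; implementations of the era were in C) encodes data the way Sun's XDR specification describes: integers as 4-byte big-endian quantities, and strings as a length word followed by the bytes, padded to a 4-byte boundary.

```python
import struct

def xdr_uint(n):
    # XDR encodes unsigned integers as 4-byte big-endian quantities
    return struct.pack(">I", n)

def xdr_string(s):
    # XDR strings: 4-byte length word, then the bytes, zero-padded
    # out to a 4-byte boundary
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4
    return xdr_uint(len(data)) + data + b"\x00" * pad

# The 3-byte name "foo" occupies 8 bytes on the wire:
# 00 00 00 03 'f' 'o' 'o' 00
wire = xdr_string("foo")
print(len(wire))  # 8
```

The point of pinning down such a canonical byte order and alignment is exactly the machine-independence being debated: every host, whatever its native word size or byte order, converts to and from this one wire form.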
hedrick@TOPAZ.RUTGERS.EDU.UUCP (12/19/86)
NFS has dealt with one set of machine dependencies: the RPC mechanism is well-defined, and works across machines. So you can presumably read directories and delete files. But once you want to get or put data, it assumes a Unix file model, in the sense that the file is assumed to be flat (no way to say "get the Nth record", or retrieve based on a record key). Furthermore, no translation of data is defined.

This shows up in the PC implementation. Unix uses LF as a line terminator. MS-DOS uses either CR or CR-LF [I don't remember which]. So you can use your Unix directory to store your PC files, but if you then go to edit them from Unix, the line terminators will be odd. There is of course a utility to change formats. For many purposes PC-NFS is just fine. It lets you use Unix disks to augment your PC's disks, and the formats aren't different enough to cause real trouble. But you'd like to see a real machine-independent file system solve that problem.

Unfortunately, it isn't clear to me how one would do it. That's presumably why, by and large, it isn't being done. You can't have the server just change line terminators, for several reasons:
 - binary files (e.g. executables) would likely get munged
 - you can't tell which files are text and which are binary
 - if you turn LF into CRLF, you change the number of bytes, and so random access by byte number isn't going to work

I think NFS is useful across a reasonable set of operating systems. I'm glad Sun put the work they did into making it as machine-independent as they did. But I certainly don't think one could claim it to be perfect.

By the way, several notes have talked about NFS "violating Unix semantics". The most common example is that file locking doesn't work across the network. It does now, in Sun release 3.2. I think it's unfair to compare the first release of NFS, which we used in production for 1.5 years across 2 different manufacturers' machines (Sun and Pyramid), with System V release 3's network file system, which still isn't very widely available.

(The other major omission is that you can't use devices across the network. Just disk files. I'd certainly like to see that fixed, but I can't say that in practice it causes much of a problem. I'll be interested to see whether that is fixed in NFS by the time we have sVr3 in operation.)
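Hedrick's third objection, that turning LF into CRLF changes byte counts and so breaks random access by byte number, can be demonstrated in a few lines. This is an illustrative sketch only, not NFS code:

```python
# Illustration: LF -> CRLF translation changes byte offsets, so a
# server that silently rewrote line terminators would break clients
# that seek by byte number.
unix_text = b"line one\nline two\n"
dos_text = unix_text.replace(b"\n", b"\r\n")

# "line two" starts at byte 9 in the Unix file...
assert unix_text[9:17] == b"line two"
# ...but at byte 10 after translation, so a seek to offset 9 computed
# against the Unix byte count now reads the wrong data.
assert dos_text[9:17] != b"line two"
assert dos_text[10:18] == b"line two"

print(len(unix_text), len(dos_text))  # 18 20
```

Every LF in the file shifts all later offsets by one more byte, which is why the translation cannot be hidden behind a byte-addressed read/write interface like NFS's.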
rick@SEISMO.CSS.GOV.UUCP (12/19/86)
> By the way, several notes have talked about NFS "violating Unix
> semantics". The most common example is that file locking doesn't work
> across the network. It does now, in Sun release 3.2.

This is only partially true. The System V lockf() IS supported, by a lock daemon. However, it does NOT support the 4.2BSD flock() file locking. Now, since everything we do here is 4.2BSD-compatible, not System V, I maintain that file locking still doesn't work. We bought a 4.2BSD-compatible system from Sun. We don't care what System V features they add.

It still doesn't do forced append writes, nor permit accessing devices. That is something a good Unix NFS would do. Those are clear violations of Unix semantics, and not fixed (nor planned to be fixed, as I understand it).

---rick
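For readers unfamiliar with the distinction rick is drawing: System V lockf() and 4.2BSD flock() are separate interfaces with separate semantics, so a lock daemon can honour one while ignoring the other. The sketch below runs both calls locally on a modern Unix; it is an illustration of the two interfaces only, and has nothing to do with Sun's actual lock daemon.

```python
# Illustration only: the two Unix locking interfaces rick contrasts.
# On a local filesystem both succeed; rick's point was that NFS's lock
# daemon served only the lockf()-style locks.
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()

# 4.2BSD-style whole-file advisory lock
fcntl.flock(fd, fcntl.LOCK_EX)
fcntl.flock(fd, fcntl.LOCK_UN)

# System V-style record lock (the interface the lock daemon supported)
fcntl.lockf(fd, fcntl.LOCK_EX)
fcntl.lockf(fd, fcntl.LOCK_UN)

os.close(fd)
os.unlink(path)
print("both lock styles acquired locally")
```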
tim@lll-crg.ARPA@hoptoad.UUCP (12/20/86)
I really question whether RPC can be considered OS-independent. It is way too big for microcomputers. An acquaintance involved with NFS (who has never worked for Sun) gave me an estimate recently that a client side alone would take 100K of machine code on a Mac.

I have enormous respect for Sun as both a hardware and a software developer. But the fact is that it just doesn't make sense to develop a supposedly machine-independent and OS-independent network file system on a single-OS network of very powerful machines. This has left them with an overly elaborate protocol which is not suited to microcomputers. PC-NFS is a client side only, and a hypothetical Mac NFS would be the same way. This gives even less functionality than good old FTP; you can't even transfer a file from a PC to a PC with it! Client-only implementations fall far short of the goals of the system.

To be truly OS-independent and machine-independent, NFS would have to be redesigned from the ground up, and simultaneously developed on more than one machine and OS.
-- 
Tim Maroney, Electronic Village Idiot
{ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)
bzs@BU-CS.BU.EDU (Barry Shein) (12/20/86)
>I really question whether RPC can be considered OS independent. It is way
>too big for microcomputers. An acquaintance involved with NFS (who has
>never worked for Sun) gave me an estimate recently that a client side alone
>would take 100K of machine code on a Mac.

Two years ago, most higher-level languages other than BASIC were too big for micros. I think this problem will vanish shortly by itself. 100K? That doesn't sound very big. A $1K Atari ST has 1MB of memory.

-Barry Shein, Boston University
tim@lll-crg.ARPA@hoptoad.UUCP (12/21/86)
Unfortunately, the current crop of 512K and 640K micros is not going to magically evaporate when the technology improves. IBM offers no upgrades, and Apple's are priced too high for large nets to adopt them on a general basis. We still have VAXen even though you can buy a better machine for under $30K these days, after all.

Also remember that the 100K estimate was for a client RPC only. I don't know how much an NFS client itself would add, nor an RPC server and NFS server. A complete NFS implementation would surely strain even a one-megabyte Mac+ or (hypothetical) MS/DOS version 5 machine. For instance, the Mac Programmer's Workshop (a Greenhills C compiler with a scaled-down but still powerful UNIX subset) requires over 800K. This means you couldn't have NFS installed during program development.
-- 
Tim Maroney, Electronic Village Idiot
{ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
hoptoad!tim@lll-crg (arpa)
schoff@CSV.RPI.EDU.UUCP (12/21/86)
>I really question whether RPC can be considered OS independent. It is way
>too big for microcomputers. An acquaintance involved with NFS (who has
>never worked for Sun) gave me an estimate recently that a client side alone
>would take 100K of machine code on a Mac.

Sorry. The Department of Computer Science at RPI's initial version of Sun RPC (which included UDP/IP and a device driver) all fit in less than 64K. Besides, it seems pretty hard to buy a machine with less than 512K these days (except maybe a CoCo).

Martin Schoffstall
weltyc%cieunix@CSV.RPI.EDU (Christopher A. Welty) (12/21/86)
David: You say you did the VMS NFS (for TWG I assume). Does this work with 4.3 UNIX NFS? (I think Mt. Xinu makes it, but if there are others for 4.3 that it works for that's fine). When I spoke to sales at TWG they said they had no idea if it worked with anything but SUNs. -Chris weltyc@csv.rpi.edu
tim@lll-crg.ARPA@hoptoad.UUCP (12/22/86)
So if a complete RPC and NFS can be fit into 64K, why is PC-NFS client only? -- Tim Maroney, Electronic Village Idiot {ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp) hoptoad!tim@lll-crg (arpa)
grr@seismo.CSS.GOV@cbmvax.UUCP (George Robbins) (12/22/86)
In article <8612200757.AA28013@hoptoad.uucp> hoptoad!tim (Tim Maroney) writes:
>I really question whether RPC can be considered OS independent. It is way
>too big for microcomputers. An acquaintance involved with NFS (who has
>never worked for Sun) gave me an estimate recently that a client side alone
>would take 100K of machine code on a Mac.

Times change - there is already an Alpha-test version of NFS for the Amiga, by an outfit called Ameristar Technologies. You can assume that in a year or so, when the 1MB chips reach price parity, you're going to see a bunch of 4-16MB 'micro computers'...
schoff@CSV.RPI.EDU.UUCP (12/22/86)
I didn't say RPC/NFS, I said UDP/IP/RPC/device-driver fit in 64K. I think the reason for PC-NFS being client-only is a matter of marketing, and of getting a product to market in a certain amount of time.

marty
ROMKEY@XX.LCS.MIT.EDU.UUCP (12/22/86)
> So if a complete RPC and NFS can be fit into 64K, why is PC-NFS client only?
Maybe SUN isn't interested in selling IBM PC's as file servers...
I suspect it would take over 64K (but under 64K code + 64K data) to do
both server and client NFS for the PC.
- john
-------
geof@decwrl.DEC.COM@apolling.UUCP (Geof Cooper) (12/22/86)
> So if a complete RPC and NFS can be fit into 64K, why is PC-NFS client only?
> --
> Tim Maroney, Electronic Village Idiot
> {ihnp4,sun,well,ptsfa,lll-crg,frog}!hoptoad!tim (uucp)
> hoptoad!tim@lll-crg (arpa)
jas@MONK.PROTEON.COM (John A. Shriver) (12/23/86)
Why no server NFS in PC-NFS? Because the MS-DOS file processor is not re-entrant, and is single-threaded. Any attempt to share the file processor between two processes ranges from hairy to dreadful. It can be done, but you have to monitor some undocumented "file system busy" or "bios busy" bit. Of course, some PC software vendors have done this (e.g. Vianet), but they use a bit more memory, and replace the file processor to do it. Maybe Sun will do it someday, but it will be hard work, and a memory pinch. Maybe just wait for "MS-DOS 5.0."
rhorn@seismo.CSS.GOV@infinet.UUCP (Rob Horn) (12/23/86)
I think that the reason PC-NFS is client-only has much more to do with the extreme difficulty of setting up any kind of server under MS-DOS than with the NFS protocol. MS-DOS is just not suitable for multi-tasking. It only understands one process plus N interrupt service routines. Any server must act as an interrupt service routine, and be subject to the associated restrictions, if it is to coexist with other applications. This also explains why other vendors of FTP and Telnet provide client-only versions for MS-DOS.
sned@PEGASUS.SCRC.Symbolics.COM (Steven L. Sneddon) (12/24/86)
[Everything in here is an opinion (mine to be precise). There may also be some facts here (I hope so, otherwise I'd better look for a different line of work). Is this better, mtr?]

    Date: Sat 20 Dec 86 10:58:38-PST
    From: David L. Kashtan <KASHTAN@SRI-IU.ARPA>

    I am the person who did the VMS NFS implementation, so I think I am
    reasonably qualified to comment on NFS as it relates to
    non-homogeneous O/S environments: The VMS NFS implementation is a
    server-only NFS implementation. It uses the SUN user-level UNIX NFS
    implementation and the 4.3BSD-based Eunice (in order to provide the
    necessary UNIX file-system semantics). Without Eunice this would
    have been a very major undertaking. I would most likely have had to
    re-implement a pretty good sized chunk of the Eunice file handling
    system in order to get NFS to work on VMS. So, in reality, the way
    to get an NFS up on VMS is to get VMS to pretend that it is UNIX.
    This is hardly something one would be happy about in a standard for
    non-homogeneous O/S environments.

Agreed.

    [...] It is my feeling that the Lisp Machine NFILE (and its
    predecessor QFILE) remote file access protocols went much further in
    dealing with file access for MANY different types of operating
    systems and I am very disappointed that nobody even looked at them
    as examples when thinking about NFS.

    David
    -------

I have a problem with this sentence, even though I agree that QFILE and NFILE can support richer underlying filesystem models than NFS. My problem is that if one were to replace the part of NFS that does the file protocol with NFILE, you wouldn't get any better behaviour with regard to the need to implement UN*X pathname syntax on top of the foreign filesystem. Instead of NFS passing you a UN*X-style pathname string, you'd have NFILE passing you a UN*X-style pathname string, and you'd be no better off. As I see it, the problem is really the lack of support for anything but UN*X filesystem syntax in UN*X.
Where the Lisp Machine systems differ is in their ability to accept a variety of pathname syntaxes, and to convert between them when necessary (such as when copying directory hierarchies), all the while sending a "string-for-host" in the syntax of the remote filesystem, rather than the syntax of the local filesystem. By the way, it's interesting that Lisp Machines, which were designed from the beginning to be used as workstations on a network, adopted the pathname syntax of host:string-for-host for 'open'. UN*X, which was designed as a self-contained system, has to indirectly chop a local pathname, in UN*X pathname syntax, into host and string-for-host via Special Files and Mount Tables. That the only thing you could pass to a UN*X 'open' is a UN*X pathname seems to me to be at the root of the problem. When I think about what it would take to change this, my head starts to hurt [I know my share about UN*X, too].
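The kind of syntax conversion described above can be made concrete with a toy converter. Everything below is hypothetical and for illustration only: it is not Symbolics' pathname code, and real VMS pathnames (devices, version numbers, logical names) are considerably richer than this sketch admits.

```python
# Hypothetical sketch: render a Unix-style pathname as a VMS-style
# "string-for-host" for the remote machine. A real converter would
# also handle devices, versions, and characters illegal on VMS.
def unix_to_vms(path):
    parts = path.strip("/").split("/")
    dirs, name = parts[:-1], parts[-1]
    if dirs:
        return "[" + ".".join(d.upper() for d in dirs) + "]" + name.upper()
    return name.upper()

print(unix_to_vms("/usr/src/nfs.c"))  # [USR.SRC]NFS.C
```

The interesting part is not the string shuffling but where it happens: in the Lisp Machine model the client performs this conversion and hands the server a name already in the server's own syntax, instead of forcing every server to parse the client's syntax.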
kre@seismo.CSS.GOV@munnari.UUCP (Robert Elz) (12/26/86)
In <861224142645.6.SNED@MEADOWLARK.SCRC.Symbolics.COM> sned@PEGASUS.SCRC.Symbolics.COM wrote a lot of nonsense, followed by one seemingly correct statement...

> [I know my share about UN*X, too].

I'd say that's right - divide all knowledge about Unix by however many billions of people there are on this planet, and you learned the bit about how to spell it without violating the trade mark.

> As I see it, the problem is really the lack
> of support for anything but UN*X filesystem syntax in UN*X.

Since Unix filename syntax is a sequence of chars terminated by a null (some systems have a maximum length, generally not less than about 1024 bytes), it's hard to see how this is much of a problem.

> By the way, it's interesting that Lisp Machines, which were designed
> from the beginning to be used as workstations on a network, adopted the
> pathname syntax of host:string-for-host for 'open'. UN*X, which was
> designed as a self-contained system, has to indirectly chop a local
> pathname, in UN*X pathname syntax, into host and string-for-host via
> Special Files and Mount Tables.

What a load of rubbish. There have been many RFS's for Unix at various times. Many early ones adopted some form of "host:string" syntax for remote file names. They ALL died (or have been forgotten), because that's a REVOLTING method for naming remote files. It means that someone has to know that the file is remote to build that filename, and, what's worse, has to know which host the file lives on. Unix implementations don't use mount tables, etc., because that's the only way it can be done, or even because it's the easiest way it can be done. Just the opposite: it's MUCH easier on Unix to implement a "host:string" syntax - an average Unix kernel programmer could do one of those (clients, given existing network code) in an easy afternoon.
The mount table mechanism is used because it gives the right semantics - host names aren't built in as part of the syntax of a filename; they're derived by a level of indirection that makes it easy to alter the configuration (you can move local files to a remote host without having to change any uses of the filenames at all).

I'm not going to comment on Lisp machines, as I've never used one. From all reports they have a lot of nice attributes, but if Symbolics' standard of employees is confined to "we're right, nothing else comes close" parrots, then I don't have much hope for their continued success.

Robert Elz
kre%munnari.oz@seismo.css.gov
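The indirection kre describes can be sketched in a few lines: a mount table maps path prefixes to servers, so the filename itself never names a host, and changing the table moves files without changing any name anywhere. The table entries and the resolver below are hypothetical illustrations, not any kernel's actual code.

```python
# Hypothetical sketch of mount-table indirection: path prefixes map to
# servers, so host names never appear in filenames. Entries invented
# for illustration.
MOUNT_TABLE = {"/n/src": "serverA:/export/src"}

def resolve(path, table):
    # Longest-prefix match, as a kernel's mount lookup effectively does
    for prefix in sorted(table, key=len, reverse=True):
        if path == prefix or path.startswith(prefix + "/"):
            return table[prefix] + path[len(prefix):]
    return "local:" + path

print(resolve("/n/src/nfs.c", MOUNT_TABLE))  # serverA:/export/src/nfs.c
print(resolve("/etc/passwd", MOUNT_TABLE))   # local:/etc/passwd
```

Moving /n/src to another machine means editing one table entry; under a "host:string" syntax it would mean editing every stored filename that mentions the old host.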
Margulies@SAPSUCKER.SCRC.SYMBOLICS.COM.UUCP (12/26/86)
    Date: Thu, 25 Dec 86 17:26:30 EST
    From: munnari!kre@seismo.CSS.GOV (Robert Elz)

    In <861224142645.6.SNED@MEADOWLARK.SCRC.Symbolics.COM>
    sned@PEGASUS.SCRC.Symbolics.COM wrote a lot of nonsense, followed
    by one seemingly correct statement...

I'll leave it to Steve to comment on the ad hominem nature of this little flame, save to point out that it is offensive in the extreme. Too bad we don't have politeness police. Slander and libel do exist, and electronic mail is no license for them. I'll concentrate on technical aspects. However, I will warn Mr. Elz of this: Steve worked on UN*X for quite a while in his career, and is eminently qualified to comment.

    > As I see it, the problem is really the lack
    > of support for anything but UN*X filesystem syntax in UN*X.

    Since Unix filename syntax is a sequence of chars terminated by a
    null (some systems have a maximum length, generally not less than
    about 1024 bytes), its hard to see how this is much of a problem.

Consider TOPS-20 directory structure, VM/CMS mini-disks, file systems that permit "/" characters in their filenames, or especially file systems (like VMS and VM/CMS) that have structured (non-byte-stream) files. None of them map very quietly into a hierarchical set of directories separated by "/" characters, and there are more and harder where they came from.

This is my-file-system-centrism. Why should a workstation impose a single model of a file system on all of the machines it talks to over the network? If I am a workstation user, and a user of a VMS system sends me mail with a pathname in it, why is it good that I have to know how to translate it into UN*X-ese? The only good that I know of is that it allows existing UN*X applications, born and bred in a homogeneous environment, to access files on foreign systems. I'm not opposed to this. Steve isn't opposed to this. Symbolics isn't opposed to this. We just note that it imposes some limitations, like the ones reported by Kashtan.
    > By the way, it's interesting that Lisp Machines, which were designed
    > from the beginning to be used as workstations on a network, adopted the
    > pathname syntax of host:string-for-host for 'open'. UN*X, which was
    > designed as a self-contained system, has to indirectly chop a local
    > pathname, in UN*X pathname syntax, into host and string-for-host via
    > Special Files and Mount Tables.

    What a load of rubbish. There have been mny RFS's for Unix at
    various times. Many early ones adopted some form of "host:string"
    syntax for remote file names, they ALL died (or have been
    forgotten), because that's a REVOLTING method for naming remote
    files. That means that someone has to know that the files is remote
    to build that filename, and what's worse has to know which host the
    file lives on. Unix implementations don't use mount tables, etc,
    because that's the only way it can be done, or even because its the
    easiest way it can be done. Just the opposite, its MUCH easier on
    unix to implement a "host:string" syntax - an average unix kernel
    programmer could do one of those (clients) (given existing network
    code) in an easy afternoon. The mount table mechanism is used
    because it gives the right semantics - host names aren't built as a
    part of the syntax of a filename, they're derived by a level of
    indirection that makes it easy to alter the configuration (you can
    move local files to a remote host without having to change any uses
    of the filenames at all).

This paragraph neatly details my point above: mapping everything to UN*X syntax is a lot easier on UN*X applications than changing them all to handle a pathname representation designed to facilitate operations in a heterogeneous environment. As it happens, the Symbolics environment represents pathnames and file system operations in a way that is optimized for heterogeneous environments. That was a design goal of ours; it wasn't of UN*X.

    I'm not going to comment on Lisp machines, as I've never used one.
    From all reports they have a lot of nice attributes, but if
    Symbolics standard of employees is confined to "we're right, nothing
    else comes close" parrots then I don't have much hope for their
    continued success.

Mr. Elz, please note the history of this conversation. Someone from DEC sent mail criticizing NFS for, in a manner of speaking, UN*X-centrism. Steve \DEFENDED/ NFS, pointing out that the problem was merely the small funnel of the pathname syntax (and the lack of semantics for structured files), and not a fundamental flaw in the protocol as compared to NFILE. I hardly call that corporate chauvinism. If you are going to run about tossing tomatoes like that one, best to be sure you read your From lines.

Benson I. Margulies