dsamperi@Citicorp.COM (Dominick Samperi) (02/26/91)
We have observed a significant performance hit when links are done over the
network (via NFS). At first I thought this was because we are linking against
several large libraries, so I copied the libraries to a local disk, and was
surprised to find that it made very little difference.

Further investigation revealed that what made all the difference was whether
the user's home directory was local or NFS-mounted. More precisely, the
performance hit resulted from the need to write the output executable file
(about 10 megabytes) to an NFS-mounted directory. Modifying the makefile so
that the output executable is written to a local ("cache") partition removed
the performance hit.

Surely this must be a problem that others have encountered. Does anybody have
a better solution? We are using SunOS 4.1.

It appears that NFS works well when one is making many small requests, say,
fetching object files from a library over the network, or including header
files from NFS-mounted directories. But there is a significant hit when NFS
is used to copy a large file (or to create a large executable file).

One observation that I cannot explain: large executables that are located in
remote (NFS-mounted) directories seem to start up quickly (faster than a
straight copy). Any ideas?

Thanks for any feedback on this.
--
Dominick Samperi -- Citicorp
dsamperi@Citicorp.COM
uunet!ccorp!dsamperi
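A minimal sketch of the makefile change described above; the target name
(prog), the object list, and the local scratch directory (/tmp) are
illustrative assumptions, not taken from the original post:

	# Link into a local directory first, then move the result onto
	# the NFS-mounted build directory.  The mv is one sequential
	# copy, which NFS handles far better than the linker's
	# seek-heavy write pattern.  (Recipe lines must begin with a tab;
	# $$$$ expands to the shell's PID, giving a unique temp name.)
	OBJS = main.o util.o

	prog: $(OBJS)
		cc -o /tmp/prog.$$$$ $(OBJS) && \
		mv /tmp/prog.$$$$ prog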
jik@athena.mit.edu (Jonathan I. Kamens) (02/26/91)
(Note: The article to which I am replying was posted separately to the three
newsgroups in my Newsgroups: line; the References: line of this message
indicates the Message-IDs under which it was posted in those newsgroups.)

It is likely that the reason linking goes slowly when creating an executable
in an NFS filesystem is that the linker has to seek back and forth to various
points in the file while linking. Because of that, it isn't just a matter of
reading in the sequential blocks of a file or writing out the sequential
blocks of a file -- the same blocks have to be read in over and over again
each time the linker seeks to them.

A possible work-around to avoid this problem is to create a symbolic link in
the directory in which you are compiling to force the linking to take place
in a local directory like /tmp or /usr/tmp (or just to specify such a
directory when specifying the output file name to the linker), and then mv
the file onto the NFS partition when it's done linking. You'll probably get a
significant speed improvement that way.

In fact, I just linked emacs (my emacs sources are on NFS) into a local file,
and then did the same link in the emacs source directory. The output of
/bin/time from the local link:

	102.9 real        11.1 user        13.6 sys

The output of /bin/time from the NFS link:

	260.4 real        10.7 user        14.6 sys

--
Jonathan Kamens			      USnail:
MIT Project Athena			11 Ashford Terrace
jik@Athena.MIT.EDU			Allston, MA  02134
Office: 617-253-8085			Home: 617-782-0710
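A sketch of the symbolic-link variant described above, assuming the build
directory is NFS-mounted and /tmp is a local partition; the file and object
names are illustrative:

	#!/bin/sh
	# Make the output name in the NFS directory a symlink to a local
	# file, so the linker's seeky writes land on the local disk.
	OBJS="main.o sub.o"		# illustrative object list
	TMPBIN=/tmp/emacs.$$
	ln -s $TMPBIN emacs
	cc -o emacs $OBJS		# writes go through the symlink to /tmp
	rm emacs			# drop the symlink
	mv $TMPBIN emacs		# one sequential copy onto the NFS partition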
guy@auspex.auspex.com (Guy Harris) (02/28/91)
> It is likely that the reason linking goes slowly when creating an
> executable in an NFS filesystem is that the linker has to seek back and
> forth to various points in the file while linking.  Because of that, it
> isn't just a matter of reading in the sequential blocks of a file or
> writing out the sequential blocks of a file -- the same blocks have to
> be read in over and over again each time the linker seeks to them.

Not once those blocks end up in the buffer cache/page pool of your system, if
it buffers I/O for NFS, as most UNIX NFS client implementations do. Only one
read from over the wire should be necessary, assuming the object files aren't
changing out from under the linker (and if they are, you have worse problems
than the performance of the link...), and assuming the buffers/page frames
containing the file's data aren't reclaimed for other purposes (but then,
that's true if the files are being read from a local file system as well).

> A possible work-around to avoid this problem is to create a symbolic
> link in the directory in which you are compiling to force the linking
> to take place in a local directory like /tmp or /usr/tmp (or just to
> specify such a directory when specifying the output file name to the
> linker), and then mv the file onto the NFS partition when it's done
> linking.  You'll probably get a significant speed improvement that way.

Which would indicate that the problem isn't one of *reading* files but one of
*writing* them: if the only symlink is for the target of the link, the input
object files are still being read from the same NFS file system; only the
output is being written to a local one. That would be quite believable,
unless your server was doing "unsafe" asynchronous writes, or had some kind
of Prestoserve-like "write buffer", or some other trick for speeding up
synchronous NFS writes.
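One way to check this read-versus-write diagnosis on your own client, with an
assumed NFS mount point /n/server and a 10 MB test file (paths and sizes are
illustrative):

	# Sequential 10 MB write to NFS vs. a local disk.  If synchronous
	# NFS writes are the bottleneck, the first command is much slower.
	/bin/time dd if=/dev/zero of=/n/server/junk bs=8k count=1280
	/bin/time dd if=/dev/zero of=/tmp/junk      bs=8k count=1280
	# Read the NFS copy back twice; the second pass should be served
	# from the client's buffer cache, per the read-caching point above.
	/bin/time dd if=/n/server/junk of=/dev/null bs=8k
	/bin/time dd if=/n/server/junk of=/dev/null bs=8k
	rm /n/server/junk /tmp/junk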