[comp.sys.hp] NFS Performance

jim@cs.strath.ac.uk (Jim Reid) (08/04/89)

In article <2980006@otter.hpl.hp.com> crd@otter.hpl.hp.com (Chris Dalton) writes:
>[ stuff about impossibly fast du over NFS ]
>
>> Another possibility is that looking at the data from the disk is a
>> significant part of the job, in which case it is possible to win by doing the
>> processing on a fast CPU.
>
>>> This is why it is generally better to do local I/O than use some network
>>> file system.
>
>Hmm.  What if ... using NFS made a difference to the amount read from
>the disk in any one request, compared with a local read?

It's a possibility, but it doesn't happen. The server will perform I/O
using its 'optimal' block size to service NFS requests, even when those
requests ask for a different amount of data to be read/written. NFS
requests end up going through the block buffer cache code on the
server. You can't seriously expect a server to read a file 1 byte at a
time because that is how the client was asking for the data.

>For example, if du via NFS caused 4K reads on the 300, and du on the 300
>caused 1K reads...

No. A remote du causes an NFS getattributes request to be made for each
file. When the server gets this request, it gets the inode number for
the file from the file handle passed in the request. It then looks for
the details about the inode. If the inode is found in the inode table
or in a block in the buffer cache, the request is answered immediately.
If not, the kernel has to work out where this inode is on disk, get a
buffer and queue a block I/O transfer to read the data. When the read
completes, the inode data can be extracted from the block and returned
to the user.

For a local du, the kernel is given (or works out from a pathname) an
inode number. It then looks for the inode in the inode table or in the
buffer cache. If not found, it works out where the inode is on disk,
gets a buffer and queues a block I/O transfer to read the data. This
is identical to the path described above for a remote request.

What this means is that the fixed overheads of getting inode information
are the same for local and remote requests. Remote requests will take
longer because of the encoding and decoding of NFS requests and of
getting stuff to and from the network. [If you are thinking 'what about
the overheads of name to inode number translation in the local case?',
think again. An NFS client has the same overheads. (Actually, they're
higher.) The client will already have had to perform this mapping to get
a file handle that it can pass in the NFS request. In order to do the
mapping, the NFS client will have made more NFS requests to the server.
(Another reason why a du of a remote file system should take a lot longer
than a local one.)]

		Jim