[comp.sys.sgi] Power Series Iris as NFS file server?

slevy@s1.msi.umn.edu (Stuart Levy) (06/04/90)

We have a 4-processor Iris (240 GTX), typically used as a workstation,
and are tempted to attach a few GB of SMD disks and make it a file server too.
It might do NFS for a dozen-odd machines, mostly other Irises and Suns,
but just for user file access -- no paging/swapping or system binaries.

My question is, does anyone know how much we can expect NFS-serving to cut
interactive performance on the Iris?  Is anyone else already in this boat?
For that matter, can anyone compare Irises and Suns as to NFS performance?
(We're using a Sun-3/260 for the purpose now.)

Another question:  We tried having our Iris export its /usr partition,
mounting it from elsewhere and doing NFS file copies to measure the
bog factor.  Seemed not too bad.  But we found that one Sun --
a Sparcstation running Sun 4.1 -- refused to mount the SGI's disk,
complaining "RPC program/version mismatch".  Other Suns (4.0.3) didn't mind,
and our 4.1 Sun happily mounts 4.0.3 NFS partitions.

This might be a better question to a Sun list, but... anyone know why a
SunOS 4.1 Sun might not mount an Irix 3.2.1 server?  (Rpcinfo notes that
Sun's mountd now offers both tcp & udp service while SGI's mountd is udp-only.
Also SGI sports mountd version 99 as well as version 1, wonder what that is?)
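(For anyone who wants to check their own machines: "rpcinfo -p host" lists
what each server registers, and program 100005 is the MOUNT protocol, so
the "vers" column shows which mountd versions a server offers.  The sketch
below works from canned output -- the port numbers are made up, and the
Sun's version list is illustrative beyond what's stated above -- and just
intersects the two version columns to see what client and server could
agree on:)

```shell
# Canned excerpts of "rpcinfo -p <host>" output for program 100005 (MOUNT).
# On a live network you would capture these with:  rpcinfo -p hostname
sgi_mountd='
    100005    1   udp   1027  mountd
    100005   99   udp   1027  mountd'

sun_mountd='
    100005    1   udp    635  mountd
    100005    2   udp    635  mountd
    100005    1   tcp    635  mountd
    100005    2   tcp    635  mountd'

# Pull the version column for program 100005, then keep only versions that
# appear in both lists -- the ones both sides could negotiate.
versions() { echo "$1" | awk '$1 == "100005" { print $2 }' | sort -u; }
common=$( { versions "$sgi_mountd"; versions "$sun_mountd"; } | sort | uniq -d )
echo "mountd versions offered by both: $common"
```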

	Stuart Levy, Geometry Group, University of Minnesota
	slevy@geom.umn.edu, (612) 624-1867

ams@ACM.PRINCETON.EDU (06/04/90)

There is a problem with mounting Iris partitions via NFS on Suns running
SunOS 4.1.  The source of the problem is an incompatibility of version
numbers (in fact, I believe the mountd version difference you mentioned
in your posting is the culprit).

I do not know of a workaround at the moment, but the last time this
came up in a discussion the consensus was that only the version
numbers were a problem, and that it was nothing more serious than that.

--ams

_________________________________________________________________________
 Andrew Simms, System Manager
 Program in Applied and Computational Mathematics
 Princeton University
 218 Fine Hall, Washington Road                      ams@acm.princeton.edu
 Princeton, NJ  08544
 609/258-5324
 609/258-1054 (fax)

mike@BRL.MIL (Mike Muuss) (06/05/90)

At BRL we have just started using some of our 4D/280 machines as NFS
file servers (in addition to being compute servers, which is what they
are for).  It works very well, and subjectively "feels" very fast
(for NFS).

Our configuration will have >35 clients (mainly 4D/240 machines) served
by about 6 servers (4D/280 and 4D/240 each with 8x1.2 Gbyte drives).
We run the clients in "dataless" mode, i.e., no home directories on the
SCSI disk, only root, /tmp, swap, and /usr.  No backups on clients,
daily backups on all servers using DUMP/RESTORE, three full cycles of
dump tapes retained on each server (one offsite).

In practice, NFS performance is quite acceptable for most tasks.
A few data-intensive operations we perform by opening a window on the
server and running the command there.  The most notable example of this
is using AR to build a 2.5 Mbyte library -- it goes about 3X faster
when run on the server -vs- over NFS.  (The issues are NFS synchronous
writes, stdio buffer-size selection, and AR's sheer quantity of excess I/O.)
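The arithmetic behind that 3X is easy to sketch: NFS makes the server
commit each write to stable storage before replying, so the number of
write calls roughly sets the number of synchronous round trips.  A
back-of-the-envelope sketch (buffer sizes assumed for illustration, not
measured here):

```shell
# Rough count of write RPCs needed to push an archive over NFS, as a
# function of write buffer size.  Each write is a synchronous round trip.
archive=$((2500 * 1024))                     # the ~2.5 Mbyte library above

rpcs() { echo $(( ($1 + $2 - 1) / $2 )); }   # ceiling of size / chunk

echo "1K stdio buffers:  $(rpcs $archive 1024) write RPCs"
echo "64K write buffers: $(rpcs $archive 65536) write RPCs"
```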

I have no hard numbers on NFS performance;  I'm too busy (and sufficiently
happy) to bother with an NFSTONE test or equiv.
	Best,
	 -Mike