[comp.sys.sun] swapping over nfs

ktk@spam.istc.sri.com (Katy Kislitzin) (06/01/89)

I am in the process of converting my 4/260 from 3.2 (Sparc) to 4.0.3.  As
this is the fifth or so time I've installed 4.X, I got to wondering...
How does creating a few large swap files interact with the Berkeley fast
file system?

Assumptions:

/export/swap is a separate file system

All swap files are created at once, just after mkfs and before
there is any other filesystem activity

/export/swap contains *only* client swap files
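For reference, the setup being assumed would be done with something like the
following (the sizes and client names here are only illustrations, not part of
the original question):

```shell
# Hypothetical sketch: pre-allocating client swap files immediately after
# newfs, before any other activity on the /export/swap filesystem.
# mkfile(8) is the SunOS utility that creates a file of the given size
# with every block allocated up front.
mkfile 16m /export/swap/client1
mkfile 16m /export/swap/client2
```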

Questions:

My understanding is that given this scenario, all disk blocks are
allocated when the swap files are created; after that they are merely
modified.  So what function does the 10% minfree serve in this case?  Will
my performance degrade if I fill it up to 111% (as is the case on a 3/180
I configured a while ago)?  I have not noticed any problems on the 3/180,
but not all the client partitions are in use...

More generally, it seems to me that the usual parameters newfs passes to
mkfs may not be optimal in the case of a few large files whose sizes never
change.  It seems like allocating few inodes, setting the block size to be
huge, and setting minfree to be very low (zero?) would improve disk usage
and performance.  Has anyone played with this?

+++++++++

These sorts of considerations lead me to wonder about how the client
swapping and the server filesystem buffering interact.  

Do blocks of the client's swap file live in the server's buffer cache
until they expire and get written back to disk?

Is swap space allocated in such a way that there is a reasonably high
likelihood that requested pages will still be in the server's buffer
cache?  (assuming a reasonable amount of paging, like on a 4 MB 3/50)

If the above is true, it seems like one could improve clients' performance
simply by adding more memory to the server and allocating a large number
of filesystem buffers.  Comments?

Thanks for your time.  If this has been discussed in a previous sunspots
or if anyone has pointers to papers or SUN documentation on this topic, I
would be interested. 

--KT (ktk@spam.istc.sri.com)

mr@racal-itd.co.uk (Martin Reed) (06/07/89)

Our experience is that taking a /export/swap partition to 111% "full" is
fine, and I can't think of a good reason why not.  I have noticed that swap
files have the sticky bit set.  I know that the SunOS 4.0 kernel code does
something with this but haven't worked out what, as yet.
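The sticky bit Martin mentions is easy to see (and reproduce) with ordinary
tools; the file name below is just a stand-in for a real client swap file:

```shell
# Swap files show up in ls -l with a mode like -rw------T; the trailing
# capital T is the sticky bit on a file with no execute permission.
touch swapfile           # stand-in for a real client swap file
chmod 1600 swapfile      # leading 1 = sticky bit, 600 = rw for owner only
ls -l swapfile
```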

More generally, it seems to me that very little research has been
done by informed individuals on tuning the Berkeley Fast File System
in the modern environment (i.e. large disks, faster CPUs and controllers).
Or, if it has, they're not publishing.  Surely somebody out there wants
some glory! :-).

        Martin Reed, Racal Imaging Systems Ltd
+----------------------------------------------------------+
|uucp: mr@ritd.co.uk, uunet!ukc!ritd!mr, sunuk!ritd!mr     | `Just hold
|Global String: +44 256 469943   Fax: +44 256 471492       |  these two
|Paper: Rankine Road, Basingstoke, Hants, England, RG24 0NW|  wires...'
+----------------------------------------------------------+
#include <std_disclaimer.h>

zjat02@uunet.uu.net (Jon A. Tankersley) (06/24/89)

One thing I noticed is that the fudge factor is kinda large for the swap
partitions (6%).  This is due to the default settings for the partition
with respect to inodes, etc.  Since /export/swap will usually have fewer
than 50 (or even 20) files and really doesn't need lost+found, wouldn't it
be better to change the newfs options (mkfs options) and trim the fat down
to 1% overhead?  I've been meaning to try this, as I am also running with
a 111% /export/swap, but haven't gotten around to it yet.
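The kind of invocation being suggested might look like this; the
bytes-per-inode figure and device name are assumptions, and whether it
actually gets the overhead down to 1% would have to be checked with df:

```shell
# Trim the fudge factor on a swap-only partition: with fewer than 50
# files, one inode per 8 MB of data is still generous, and minfree can
# come down from the default 10%.
#   -i 8388608   bytes of data per inode
#   -m 1         reserve only 1% instead of 10%
newfs -i 8388608 -m 1 /dev/rxy0g
```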

Has anybody figured out why something tries to use /export/swap on the
server for more than NFS swap?  I get 'file system full' messages every
once in a while.  Dunno what vmunix is doing...
-tank-
#include <std/disclaimer.h>		/* nobody knows the trouble I .... */
tank@apctrc.trc.amoco.com    ..!uunet!apctrc!tank