[comp.sys.sun] Has anyone tried the quota system?

celvin@EE.Surrey.Ac.UK (Chris Elvin) (06/27/90)

In article <9230@brazos.Rice.edu> andy@acorn.co.uk (Andy Ingle) writes:
>
>aucs!peter@cs.utexas.edu (Peter Steele) writes:
>
>>We want to enable the quota system for student accounts. Has anyone made
>>use of this system and can comment on how it affects performance?

We use quotas for all accounts and find no general performance loss *BUT*
when a user logs on, all mounted filesystems are checked for the user's
quota (a user may have space on more than one filesystem).  I have 10 file
servers with a total in excess of 8 Gbytes of cross-mounted disk on some
30 partitions.  Logging in may take several minutes on a heavily loaded or
slow machine.  This can be counteracted with hushlogin(5).
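
The hushlogin mechanism is per-user: login(1) goes quiet for any account
whose home directory contains a (typically empty) .hushlogin file, which
per the above skips the login-time quota report. A minimal sketch:

```shell
# Create an empty .hushlogin so login(1) runs in "quiet" mode for this
# user, suppressing the login-time quota report described above.
touch "$HOME/.hushlogin"
```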

Chris Elvin
C.Elvin@EE.Surrey.Ac.UK        "what happens if I press this big red button"
Dept of Elec. Eng, University of Surrey, Guildford, Surrey, GU2 5XH. England

guy@uunet.uu.net (Guy Harris) (06/29/90)

>We use quotas for all accounts and find no general performance loss *BUT*
>when a user logs on, all mounted filesystems are checked for the user's
>quota (a user may have space on more than one filesystem).  I have 10 file
>servers with a total in excess of 8 Gbytes of cross-mounted disk on some
>30 partitions.  Logging in may take several minutes on a heavily loaded or
>slow machine.  This can be counteracted with hushlogin(5).

Or, if the problem is with NFS-mounted file systems, by mounting them on
the *clients* with the "noquota" option; this option obviously does *not*
turn off quota checking when creating or extending files (the
checks are done on the server, and you can't control that with a client
mount option), but *does* mean that the "/usr/ucb/quota" fired off by
"login" won't bother asking the remote "quota server" about those file
systems.
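
For reference, the option goes in the mount-options field of the client's
/etc/fstab; the server name and paths in this sketch are made up:

```
# client /etc/fstab -- "bigserver" and the paths are hypothetical
bigserver:/export/home   /home   nfs   rw,noquota   0 0
```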

pcg@compsci.aberystwyth.ac.uk (Piercarlo Grandi) (07/01/90)

In article <9340@brazos.Rice.edu> celvin@EE.Surrey.Ac.UK (Chris Elvin) writes:

	[ ... about quota checking ... ]

   I have 10 file servers with a total in excess of 8 Gbytes of
   cross-mounted disk on some 30 partitions.  Logging in may take
   several minutes on a heavily loaded or slow machine.

Having too many mount points is really an exaggeration. I think that one
mount point per server is all that is needed. Each server should have a
directory with its name under which it mounts all filesystems that it must
export. The resulting tree is like:

		/server1/fs0/...
		/server1/fs1/...
		/server2/fsA/...
		/server3/fsX/...
		/server3/fsY/...
		/server3/fsZ/...

Where if you are on server1 you just mount your devices onto fs0 and fs1,
whereas you NFS mount the entire /server2 and /server3 from the other
servers.
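
A sketch of what server1's /etc/fstab might look like under this scheme
(device names, filesystem types, and server names are all invented for
illustration):

```
# /etc/fstab on server1: local disks go under /server1, and each
# remote server is picked up with a single NFS mount of its whole tree.
/dev/sd0g          /server1/fs0   4.2   rw   1 2
/dev/sd1g          /server1/fs1   4.2   rw   1 2
server2:/server2   /server2       nfs   rw   0 0
server3:/server3   /server3       nfs   rw   0 0
```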

This organization dramatically reduces the number of mounted filesystems,
which is *good* for many reasons (and helps prevent getcwd() lockups on
pre-4.1 systems, especially if combined with a shadow mount tree of
symbolic links). Hey, mount points *cost*.

Naturally the best solution is simply :-) to get rid of NFS altogether,
and alternatives are belatedly becoming available (Mach, AFS, Coda,
Sprite, ...).

As to disabling quota reporting at login instead, it is not too
difficult; several alternatives come to mind:

a) Rename /usr/ucb/quota to e.g. /usr/ucb/quotas
b) Patch the /bin/login binary
c) Get one of the many freely available login.c sources
d) Mount remote filesystems with the noquota option; the server still
   enforces the quotas.

I think a) is the easiest; d) is more 'elegant'...
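
Option a) is just a rename, so that login's attempt to run the quota
binary simply fails to find it. Sketched here against a scratch
directory standing in for the live /usr/ucb:

```shell
# Demonstrate option a) on a scratch copy rather than the real /usr/ucb.
mkdir -p /tmp/ucb
: > /tmp/ucb/quota               # stand-in for the real quota binary
mv /tmp/ucb/quota /tmp/ucb/quotas
# login's quota invocation now finds nothing to run; users who still
# want a report can invoke "quotas" by hand.
```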

Piercarlo "Peter" Grandi          | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth       | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK| INET: pcg@cs.aber.ac.uk

del@thrush.mlb.semi.harris.com (Don Lewis) (07/12/90)

In article <9538@brazos.Rice.edu> pcg@compsci.aberystwyth.ac.uk (Piercarlo Grandi) writes:
|Having too many mount points is really an exaggeration. I think that one
|mount point per server is all that is needed. Each server should have a
|directory with its name under which it mounts all filesystems that it must
|export. The resulting tree is like:
|
|		/server1/fs0/...
|		/server1/fs1/...
|		/server2/fsA/...
|		/server3/fsX/...
|		/server3/fsY/...
|		/server3/fsZ/...
|
|Where if you are on server1 you just mount your devices onto fs0 and fs1,
|whereas you NFS mount the entire /server2 and /server3 from the other
|servers.

Nope, sorry, this won't work, at least not on most machines (i.e. Suns).
NFS mounts on the clients don't follow mounts on the server.  In your
example, on server1, the directories /server3/{fsX,fsY,fsZ} would all be
empty.  I do remember seeing a SunOS source patch at one time that makes
the NFS server behave the way you describe.

|This organization dramatically reduces the number of mounted filesystems,
|which is *good* for many reasons (and helps prevent getcwd() lockups on
|pre-4.1 systems,

Not true in this case.  If /server2 and /server3 are mount points, then it
is real easy for something to hang because of stat()'ing these directories
when one of these servers is down.

Don "Truck" Lewis                      Harris Semiconductor
Internet:  del@mlb.semi.harris.com     PO Box 883   MS 62A-028
Phone:     (407) 729-5205              Melbourne, FL  32901