[comp.os.vms] Tuning VMS

SYSMGR@IPG.PH.KCL.AC.UK (07/04/88)

I've seen a couple of queries about VMS tuning recently, and thought I'd submit
my two pennies worth.

*** flame on ***
DEC are either unable to understand the excellent memory management facilities
that their programmers put into VMS, or want to sell as much memory as possible.
This is reflected in the way that they lead you to believe that the modified
and free page lists are wasted memory.
*** flame off ***
They are in fact CACHES of pages available for use IF NEEDED by another
process, but still containing valid data which can be faulted back into a
process's working set without incurring a disk I/O. AUTOGEN doesn't
understand this either.
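You can watch these caches for yourself from DCL; a plain SHOW MEMORY displays
(among other things) the current sizes of the free and modified page lists, so
you can see how much reclaimable-but-still-valid memory the system is holding:

```
$ SHOW MEMORY   ! physical memory summary includes free and modified list sizes
```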

Setting a large MPW_HILIMIT causes writing of modified pages to the page file
to be deferred quite a while after the pages are lost from a process working
set. This helps a lot if the process almost immediately wants the page back,
which it often does; you save write I/Os. Even more important, however,
is to have a large FREELIM rather than the derisory one that AUTOGEN gives you.
This allows all processes running on the system to share a pool of memory,
rather than locking large amounts into their own working set thereby causing
swap thrashing. A system with a small FREELIM will have the best throughput
under light load and/or an oversupply of memory, but will degrade
catastrophically under overload conditions. One with a large FREELIM will
run far more happily when overloaded, and the penalty at other times seems slight.
Large means aiming to have 20% of the available memory (ie that not permanently
hogged by VMS) on the modified and free lists.
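As a sketch of what this means in practice (the figures below are purely
illustrative, worked for a 4-Mbyte machine with 512-byte pages; you must scale
them to your own configuration), the relevant lines in
SYS$SYSTEM:MODPARAMS.DAT might look like:

```
! Illustrative values only -- derive your own from your memory size.
! 4 Mbytes = 8192 pages of 512 bytes; if VMS permanently holds ~3000
! of them, ~5000 are available, and 20% of that is ~1000 pages.
FREELIM = 1000          ! far larger than the AUTOGEN default
MPW_HILIMIT = 1500      ! defer modified-page writing as long as possible
                        ! (keep MPW_WAITLIMIT >= MPW_HILIMIT)
```

Run AUTOGEN afterwards so the values are sanity-checked and installed.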

If you adopt this philosophy, two other things are important. One is to
give your users small working-set defaults, small-to-medium working-set quotas
and large-to-huge working-set extents. This allows the real memory hogs to
expand their working sets as required if the system is quiet, such as
out-of-hours, but ensures that an attempt to run them at times of peak loading
does not cripple the
whole system. The other is to turn on voluntary decrementing (set the PFRATL
parameter nonzero, typically one-tenth of PFRATH). This is because very
few images require constant amounts of memory as they run; it is common
for a scientific program to appear to require an enormous amount during
its initialisation phase, which then reduces to a far smaller amount once
it settles down to a compute-bound loop. Often this behaviour is periodic.
One program I know of cycles between 80 and 10000 pages over a 20-CPU-minute
period, using the smaller amount of memory for 90% of the time; without
voluntary decrementing it would use 10000 pages of memory all the time.
Commercial programs can be very similar - a sort operation in particular
is likely to use far more memory than previous or subsequent sequential
processing.
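Both changes can be sketched as follows (the username and every value here are
invented examples, not recommendations; PFRATL and PFRATH are dynamic
parameters, so SYSGEN can change them on a running system):

```
$ ! Working-set limits: small quota, large extent (example values only)
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY SMITH /WSDEFAULT=150 /WSQUOTA=350 /WSEXTENT=4000
UAF> EXIT
$ ! Voluntary decrementing: make PFRATL nonzero, about one-tenth of PFRATH
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET PFRATL 12     ! assuming PFRATH is at its default of 120
SYSGEN> WRITE ACTIVE      ! takes effect now; add to MODPARAMS.DAT to survive reboots
SYSGEN> EXIT
```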

Using this approach, I was able to support 20-up users on a 4Mbyte 780,
with response times to trivial requests (such as EDT keystrokes) remaining
perfectly tolerable. (The tense reflects the fact that we are now a 3-node
LAVC). Incidentally, CPUs are getting faster more quickly than discs; I suspect
that this approach will work even better on the new VAXes (-3000 and -8000
systems) than on the older ones.

One caveat; tuning is application dependent, and I do not make any claim that
this approach ALWAYS works best. I do think that it's always worth trying,
before buying your way out of a resource-limit problem. And one possible
problem; you may get trouble with bean-counters who can't or won't understand
why program X costs twice as much to run on-peak rather than off-peak. (You
simply have to develop a thick skin and point out that public transport and
electricity companies and even timeshare bureaux do just the same).


Nigel Arnot (Dept. of Physics, Kings College, the Strand, London WC2R 2LS, UK)

                 Janet: SYSMGR@UK.AC.KCL.PH.IPG
                 Arpa:  SYSMGR%UK.AC.KCL.PH.IPG@UKACRL.BITNET
                 UUCP:  SYSMGR%UK.AC.KCL.PH.IPG@UKC
 Bitnet/NetNorth/Earn:  SYSMGR@IPG.PH.KCL.AC.UK (OR) SYSMGR%IPG.PH.KCL@AC.UK
                 Phone: +44 1 836 6192