[comp.sys.att] Questions about 22MHz-vs-MPE on the

flint@gistdev.UUCP (01/28/89)

The first thing I'd want to see is some SAR (System Activity Report) output
from their existing machine.  The most useful will be "sar -u"; the things to
look at specifically are the following:

1)  Are they really using all their CPU capacity now?  (Look at %idle.)
    If %idle is 50% or more all the time, extra processing capacity will
    probably not help them much, if at all.
2)  If they are using all their CPU, what is it going toward?  (Look at the
    %wio column.)  This is time spent waiting for i/o, and if that number
    is high (more than 10 to 15%), or if "sar -d" shows a %busy figure of
    more than about 50%, then the disk system is the bottleneck.
3)  They have a zillion users, so you gotta look at how much swapping they
    are doing.  I couldn't remember this one, so I looked in the sysadm
    manual: it says to run "sar -qw" and look at %swpocc (swap occupied)
    for values greater than 5, and at the swap-out rate for values greater
    than 1.0.  A zillion users competing for the machine can kill it
    faster than anything.
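Checks (1) and (2) boil down to averaging a couple of columns.  A minimal
sketch, assuming a hypothetical "sar -u" output layout -- real column order
varies between UNIX releases, so check the header on your own machine first:

```shell
# Hypothetical "sar -u" sample (columns assumed: time %usr %sys %wio %idle).
sample='12:00:01  %usr  %sys  %wio  %idle
12:20:01    40    25    20     15
12:40:01    45    30    18      7
13:00:01    38    28    25      9'

# Average %wio and %idle over the sampled intervals (skipping the header).
echo "$sample" | awk 'NR > 1 { wio += $4; idle += $5; n++ }
    END { printf "avg %%wio=%d avg %%idle=%d\n", wio/n, idle/n }'
```

With numbers like these (avg %wio around 21, avg %idle around 10), the
machine is busy and a good chunk of the time is going to i/o wait.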

They have a nice flowchart on p. 6-12 in the sysadm guide that will do you
more good than anything I could summarize here, but the bottom line is that
a lot of things besides raw processor speed can cause poor performance.  If
they are doing a lot of swapping, then adding memory to the machine will do
wonders.  If the %wio is high, then adding another disk (a fast one: don't
get some >50ms access time garbage, get something no worse than 18-25ms) or
maybe even just re-arranging the filesystems between existing disks will
help a lot.  (It is surprising how many people put / and /usr on the same
disk.  Some other file system might be getting the most use, too: you gotta
look at the sar info to know, and then balance things so that one disk isn't
doing all the work.)  It is quite possible that the most effective way to
spend their $10K will be to start with $1K in labor performance tuning their
existing system, so that you know exactly what is causing their current
problem.
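To spot an unbalanced layout, compare the per-disk %busy figures from
"sar -d".  A sketch with hypothetical device names and an assumed column
layout (the real columns differ between releases):

```shell
# Hypothetical "sar -d" sample: one disk doing nearly all the work.
sample_d='12:00:01  device  %busy  avque  r+w/s
12:20:01  hdsk-0     78    3.2    41
12:20:01  hdsk-1      6    1.0     3'

# Flag any disk past the ~50% busy threshold mentioned above.
echo "$sample_d" | awk 'NR > 1 && $3 > 50 \
    { print $2 " is the bottleneck (" $3 "% busy)" }'
```

A lopsided pair like that is the classic sign that moving a hot filesystem
to the idle disk will buy more than any hardware upgrade.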

We have INFORMIX on a 400 here and it is a real dog, to the point that our
marketing people almost refuse to demo it there.  Our 600 beats the pants
off the 400 on that, but I think that is due to the faster disks and the
disk caching hardware in the 600 as much as anything.  (I've never actually
analyzed the two carefully to find out, though.)  A database application
like INFORMIX is prone to using lots of memory and lots of disk resources,
so in this case especially I'd recommend looking at those two areas to
see whether they are the problem or not.
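The memory side of that question is the swap check from item (3).  A sketch
against a hypothetical "sar -qw" sample (column layout assumed; the >5 and
>1.0 thresholds are the ones from the sysadm guide):

```shell
# Hypothetical "sar -qw" sample: swap-outs/s > 1.0 or %swpocc > 5
# means the machine is memory-starved (per the sysadm guide).
sample_w='12:00:01  swpin/s  swpout/s  %swpocc
12:20:01    0.4      1.8       12'

echo "$sample_w" | awk 'NR > 1 && ($3 > 1.0 || $4 > 5) \
    { print "memory pressure: swpout/s=" $3 ", %swpocc=" $4 }'
```

If that fires during the hours the database is in use, memory is a better
place for the money than a faster CPU.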

Flint Pellett, Global Information Systems Technology, Inc.
1800 Woodfield Drive, Savoy, IL  61874     (217) 352-1165
INTERNET: flint%gistdev@uxc.cso.uiuc.edu
UUCP:     {uunet,pur-ee,convex}!uiucuxc!gistdev!flint