[comp.unix.wizards] How many packets per second from a Sun-3 file server?

roy@phri.UUCP (Roy Smith) (07/28/87)

	About how many packets per second should I reasonably expect to get
from a Sun-3/180 file server?  Ours seem to max out at about 300 pps or
so, but I have no idea whether that's a lot or not.  I've seen individual
clients peak at 500-600 or so.  One server has 4 Mbytes and one Eagle;
the other has 8 Mbytes with two Eagles sharing a controller (although we
currently have the file system laid out such that most activity is on one
drive; we're working on fixing that).  "Traffic" shows the packet mix to
be about 60/30/10 nd/udp/tcp; presumably most of the udp is NFS.
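
	In case anyone wants to compare numbers and doesn't have "traffic"
handy, one crude way to get a packets-per-second figure is to difference
the netstat -i counters over an interval.  This is just a sketch; it
assumes le0 is the server's interface and that Ipkts and Opkts are the
5th and 7th columns of your netstat -i output, so adjust to taste:

	#!/bin/sh
	# Rough packets/second: sample the interface packet counters twice
	# and divide the difference by the interval.
	IF=le0		# assumed interface name
	INTERVAL=10
	before=`netstat -i | grep "^$IF" | sed 1q | awk '{print $5 + $7}'`
	sleep $INTERVAL
	after=`netstat -i | grep "^$IF" | sed 1q | awk '{print $5 + $7}'`
	pps=`expr \( $after - $before \) / $INTERVAL`
	echo "$pps packets/sec (input + output) on $IF"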

	I realize there is no single good answer to my question, especially
since I haven't told you anything about our workload, etc., but I'm looking
for ballpark figures.  Is "a lot of packets" 200 per second?  500?  1000?
2000?  More?

	Likewise, disk transfers/second seems to struggle to reach 50.  Is
that "a lot"?  Clearly the fact that our 2-Eagle server doesn't do any
better in this department than our 1-Eagle one means that we're really
wasting that second disk arm.  But, like I said, we're working on fixing
that.
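
	For anyone who wants to watch the same number on your own server,
the per-drive tps column from iostat is one place to get it; for example,
assuming the Eagles show up as xy0 and xy1 and that your iostat takes
drive names and an interval the way the BSD one does:

	# Print per-drive transfer rates every 5 seconds; the tps column
	# is transfers per second.
	iostat xy0 xy1 5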
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

hedrick@topaz.rutgers.edu (Charles Hedrick) (08/06/87)

I haven't looked at packets per second from Sun file servers, but
I did do a bunch of related experiments.  These were done using
multiple 3/50s attacking file servers with scripts that did
I/O-intensive work.  (Nothing fancy: just shell scripts that did
copies, deletes, C compilations, etc.; a sketch of one appears after
the list below.)  I came to the following conclusions:
  - a 3/100 with one Eagle is limited by the disk subsystem, not
	by CPU or Ethernet.  When it is running at full speed, it
	is using about 2/3 of the CPU.  I don't recall the exact
	data rate we got, but I think it was about 47 transfers/sec,
	which is consistent with your results.
  - two Eagles with one controller gave only a few percent better
	results.  This is not surprising, as the controllers are
	currently operated in a mode that doesn't allow parallel
	operation of two disks.  (This is supposed to be fixed in
	4.0, though as I recall I heard the same claim about earlier
	releases.)
  - two Eagles with two controllers caused CPU to become the limiting
	factor.  We got about 1.5 times the throughput of one disk,
	which is not surprising when you consider that one disk used
	2/3 of the CPU.
  - we saw no difference between the original Eagle with a 450
	controller and a super-Eagle with a 451 controller.  However,
	I find this very hard to believe, since our subjective
	impression based on multiuser performance is that the
	super-Eagle/451 combination is fairly quick.
  - we saw no improvement in going from a 3/100 with 4MB to a
	3/200 with 8MB, where one disk was in use.  Of course with
	two disks and two controllers, there would almost certainly
	be an improvement, since in that configuration a 3/100 runs
	out of CPU.
  - we saw no reason to believe that the Ethernet subsystem ever
	limited performance.
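
The load scripts were nothing worth posting, but for concreteness, here
is the flavor of the thing.  This is a sketch, not one of the scripts we
actually ran, and the two paths at the top are placeholders:

	#!/bin/sh
	# Crude client-side load generator: copy, compile, and delete in
	# a loop, using a scratch directory that lives on the server.
	SRC=/usr/src/hello.c		# any small C file
	SCRATCH=/server/scratch/load$$	# server-mounted scratch area
	mkdir $SCRATCH
	while :
	do
		cp $SRC $SCRATCH/hello.c
		cc -o $SCRATCH/a.out $SCRATCH/hello.c
		rm -f $SCRATCH/a.out $SCRATCH/hello.c
	done

A handful of clients running loops like this at the same time is what I
mean by "attacking."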

I found the most consistent measure of load was disk transfers/sec,
from iostat or vmstat.

By the way, we understand that some people are using 8MB machines as
file servers.  Does anyone have any evidence that adding memory helps
performance, for a machine used only as a file server?  When people
add memory, do they also retune the kernel so that it uses more
memory for block I/O buffers?  (If not, most of that extra memory
will be allocated for user processes, of which there are presumably
none.)

haahr@polygen.uucp (Paul Haahr) (08/12/87)

In article <13745@topaz.rutgers.edu>,
	hedrick@topaz.rutgers.edu (Charles Hedrick) writes:
...
>By the way, we understand that some people are using 8MB machines as
>file servers.  Does anyone have any evidence that adding memory helps
>performance, for a machine used only as a file server?  When people
>add memory, do they also retune the kernel so that it uses more
>memory for block I/O buffers?  (If not, most of that extra memory
>will be allocated for user processes, of which there are presumably
>none.)

With a Sun-3/180 with one Eagle and 12M acting almost entirely as an nd
server (both boot and swap), the feel of the machines running off of it
(all 3/50s) improved a lot (sorry I can't offer anything more
quantitative) after we increased the number of buffers from 10% of
memory (the Berkeley and Sun default) to 920 pages; 920 pages * 8k/page
is about 7.2M, or roughly 60% of 12M less the kernel.  Note that this
machine does almost no NFS work; that is handled by a second server.

We have not tried this yet for the NFS server.  I'm not sure what the
ramifications of tuning the server this way are since nfsd and biod are
"kernel processes that have user context." (paraphrased from nfssvc(2))
I would guess that the same tuning would have a similar effect, but haven't
had a chance to try it.

SunOS 4.0 and the new virtual memory subsystem (Gingell et al, Usenix 1987)
should improve the system, in that we won't need to set aside a specific
portion of our memory to be used for block I/O buffers.  One gripe that
will disappear along with the kernel buffers: as a non-source site we
can't tune this number except by patching with adb (see the sketch
below), and so the setting never gets recorded in our /sys/conf files.
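
For what it's worth, the patch itself is tiny.  This is a minimal sketch,
on the assumption that your vmunix uses the traditional BSD "nbuf"
variable and only computes its own default at boot when the value in the
image is zero; pick your own number, and remember it takes a reboot:

	# Patch the buffer count in the kernel image; 0t920 is decimal 920.
	# ?W writes into /vmunix itself, so the change takes effect at the
	# next reboot, not on the running system.
	echo 'nbuf?W 0t920' | adb -w /vmunix

Patching the image rather than the running kernel matters here: the
buffer pool is carved out at boot, so poking /dev/kmem wouldn't grow it.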
-- 
paul haahr				(bu-cs|princeton)!polygen!haahr
polygen corporation, waltham, ma	617-890-2888