[comp.unix.aux] NBUF and pstat

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) (01/16/91)

Currently, my kernel is built with NBUF set to 0, meaning that 10% of
the free memory at start-up is used for disk buffers. I want more. The
question is: how many buffers are there?

I would guess, using pstat, that NBUF is actually set to 1551, since
that is the value that pstat returns for buffers... Is this right?

(better to be safe...)
--
=======================================================================
#include <std/disclaimer.h>
                                 =:^)
           Jim Jagielski                    NASA/GSFC, Code 711.1
     jim@jagubox.gsfc.nasa.gov               Greenbelt, MD 20771

"Exploding is a perfectly normal medical phenomenon. In many fields of
 medicine nowadays, a dose of dynamite can do a world of good."

liam@cs.qmw.ac.uk (William Roberts;) (01/17/91)

In <2657@dftsrv.gsfc.nasa.gov> jim@jagubox.gsfc.nasa.gov (Jim Jagielski) 
writes:


>Currently, my kernel is built with NBUF being 0, meaning that 10% of
>the free space at start-up is utilized for disk buffers. I want more.
>The question is how many buffers are there???

>I would guess, using pstat, that NBUF is actually set to 1551 since
>that is the value that pstat returns for buffers.... Is this right?

I believe that you are right: certainly the comments in /usr/include/sys/var.h
seem to agree with you, as does examination of the various machines I have to
hand. On our machines we have SBUFSIZE set to 2048 and NBUF set to zero,
and we see
and we see

  4 Meg => 135 buffers
  5 Meg => 185 buffers   (extra 50 for 1 Meg extra memory)
  8 Meg => 339 buffers   (extra 204 for 4 Meg extra memory)

204*2K = 408K, which is pretty much 10% of 4 megabytes. The 5 Meg machine is a
Mac II rather than a IIcx and has one of the infamous old EtherPort II ethernet
cards, so its kernel is going to be a bit different and occupy a different
amount of space, hence slightly fewer extra buffers than you might expect.

Your figure of 1551 seems very high - let me guess: either you are running a 
32 Megabyte machine, or you have SBUFSIZE set to 1024 and you are running a 16 
Meg machine.
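
You can check the arithmetic with bc: 204 extra buffers at 2K each against
10% of the extra 4 Meg, and 1551 buffers at 2K each against 10% of 32 Meg
(the divisions are truncated, since bc does integer arithmetic by default):

  % echo '204 * 2048' | bc
  417792
  % echo '4 * 1024 * 1024 / 10' | bc
  419430
  % echo '1551 * 2048' | bc
  3176448
  % echo '32 * 1024 * 1024 / 10' | bc
  3355443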

For networking stuff there is also NMBUFS, which we set to 500, and 9
different NBLK* values associated with STREAMS buffering, all of which are
also types of "buffers" and affect the amount of memory you have available.
--

William Roberts                 ARPA: liam@cs.qmw.ac.uk
Queen Mary & Westfield College  UUCP: liam@qmw-cs.UUCP
Mile End Road                   AppleLink: UK0087
LONDON, E1 4NS, UK              Tel:  071-975 5250 (Fax: 081-980 6533)

jim@jagubox.gsfc.nasa.gov (Jim Jagielski) (01/17/91)

In article <2859@redstar.cs.qmw.ac.uk> liam@cs.qmw.ac.uk (William Roberts;) writes:
}In <2657@dftsrv.gsfc.nasa.gov> jim@jagubox.gsfc.nasa.gov (Jim Jagielski) 
}writes:
}
}
}>Currently, my kernel is built with NBUF being 0, meaning that 10% of
}>the free space at start-up is utilized for disk buffers. I want more.
}>The question is how many buffers are there???
}
}>I would guess, using pstat, that NBUF is actually set to 1551 since
}>that is the value that pstat returns for buffers.... Is this right?
}
}I believe that you are right: 
}

I am :):)

}
}Your figure of 1551 seems very high - let me guess: either you are running a 
}32 Megabyte machine, or you have SBUFSIZE set to 1024 and you are running a 16 
}Meg machine.
}

It's a 32 meg Mac with SBUFSIZE 2048... right again!

Anyway, this all leads to an interesting question... certainly, as far as
disk buffers are concerned, there is a point of diminishing returns where
increasing the number of buffers adds very little or (possibly) even DECREASES
performance. Does anyone have any good system tuning information for A/UX?
25% of memory for NBUF seems about right, but on large systems (32 megs) that
still leaves a good chunk of free memory... Of course, that isn't bad, since
it means that swapping won't occur :)
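
(For the record, 25% of a 32 meg machine at a 2K SBUFSIZE would be

  % echo '32 * 1024 * 1024 / 4 / 2048' | bc
  4096

buffers, which still leaves some 24 megs for everything else.)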

--
=======================================================================
#include <std/disclaimer.h>
                                 =:^)
           Jim Jagielski                    NASA/GSFC, Code 711.1
     jim@jagubox.gsfc.nasa.gov               Greenbelt, MD 20771

"Exploding is a perfectly normal medical phenomenon. In many fields of
 medicine nowadays, a dose of dynamite can do a world of good."

sramtrc@windy.dsir.govt.nz (01/18/91)

In article <2676@dftsrv.gsfc.nasa.gov>, jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
> Anyway, this all leads to an interesting question... certainly, as far as
> disk buffers are concerned, there is a point of diminishing returns where
> increasing the amount of buffers adds very little or even DECREASES performance
> (possibly). Does anyone have any good system tuning information for A/UX...
> 25% memory for NBUF seems about right, but with large systems (32 megs) that
> still leaves a good chunk of free memory... Of course, that isn't bad since
> that means that swapping won't occur :)

As I understand it, the bigger the disk cache, the better the performance,
because the actual disk has to be accessed less often. Accessing RAM is faster
than accessing iron, so the more that stays in RAM the better. And the bigger
NBUF is, the more RAM is available for caching. If you are doing program
development this is really useful, because the compiler, the include files,
and all the tmp files stay in RAM, and that's a lot of disk accesses saved.
There are still some disk accesses you can't avoid, because the ufs filesystem
writes enough to disk synchronously to be able to maintain filesystem
consistency in case of a crash.
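
A crude way to watch the cache at work is to read a biggish file twice and
compare the times; the second pass should come mostly out of the buffer cache
(any large file will do, libc.a is just one most systems have lying around):

  % time cat /usr/lib/libc.a > /dev/null
  % time cat /usr/lib/libc.a > /dev/null

The second run should show much less real time than the first, provided the
file fits in the cache.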

I'm not sure what happens in the event of a crash with a large cache. I
think the larger the cache, the more data you lose. But you definitely do
lose data in any crash; how much depends on how long it has been since the
last sync. I do kernel programming, so I'm used to dealing with crashes and
am in the habit of doing syncs before running dodgy software, especially now
that MacOS programs can crash the kernel. I include a sync in my makefiles
in case I forget.
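
For instance, a rule along these lines (dodgy_prog standing in for whatever
I'm testing at the moment, and remember the commands need a leading tab):

  % cat Makefile
  test: dodgy_prog
          sync
          ./dodgy_prog

so the dirty buffers are flushed to disk just before the risky run, even if
I forget to type sync myself.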

Twice now I have rebooted after a crash to find my current work missing.
This is quite disconcerting, because one never has backups of the past few
hours' work. I don't know why this happens; files never used to disappear
completely with the svfs filesystem. In both cases I was able to recover
my files from the raw disk, and the recovered files seem to be the most
recent versions, i.e. the data did make it to disk.

Tony Cooper
sramtrc@albert.dsir.govt.nz

ksand@Apple.COM (Kent Sandvik) (01/19/91)

In article <18804.2796de90@windy.dsir.govt.nz> sramtrc@albert.dsir.govt.nz writes:
>In article <2676@dftsrv.gsfc.nasa.gov>, jim@jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>> Anyway, this all leads to an interesting question... certainly, as far as
>> disk buffers are concerned, there is a point of diminishing returns where
>> increasing the amount of buffers adds very little or even DECREASES performance
>> (possibly). Does anyone have any good system tuning information for A/UX...
>> 25% memory for NBUF seems about right, but with large systems (32 megs) that
>> still leaves a good chunk of free memory... Of course, that isn't bad since
>> that means that swapping won't occur :)
>
>As I understand it the bigger the disk cache, the better the performance
>because the less the actual disk has to be accessed. Accessing RAM is faster
>than accessing iron so the more there is in RAM the better. And the more
>the NBUFS, the more the RAM available for caching. If you are doing program
>development this is really useful because the compiler, the include files,
>and all the tmp files stay in RAM and that's a lot of disk accesses that
>are saved. There are still some disk accesses that are not "necessary"
>because the ufs filesystem writes enough stuff to disk immediately to be
>able to maintain filesystem consistency in case of a crash.

This is true until you get to the point where the buffers residing in
memory make it hard to find more free space, so the system starts paging,
and ultimately swapping.

And swapping should be avoided, because swapping large binaries takes
a long time.

This is all empirical engineering, and there are no simple rules. The
'10% of free memory' rule is an approximation. It all depends on the
typical work load of the system: whether a lot of small binaries are
running or a couple of big ones, how other resources are allocated, and
the speed of the disk holding the swap/page partition.

The best way is to do an empirical test: configure kernels with various
buffer counts, run the same mix of applications as in real life, and
measure, for instance by starting processes that read from and write to
disk while timing benchmarks run.
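
Something as simple as a timed copy-and-compare of a large file gives a
first-order number to compare between configurations (bigfile here stands
for any file of a few megabytes):

  % sync
  % time sh -c 'cp bigfile /tmp/x; cmp bigfile /tmp/x; rm /tmp/x'

Run it a few times per kernel configuration and compare the real times.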

>I'm not sure what happens in the event of a crash with a large cache. I
>think the larger the cache the more data you lose. But definitely you do
>lose data in any crash. How much depends on how long since the last sync.
>I do kernel programming so I'm used to dealing with crashes so I'm in the
>habit of doing syncs before running dodgy software. Especially more so now
>that MacOS programs can crash the kernel. I include a sync in my makefiles
>in case I forget.

sync;sync;sync :-). The system automagically syncs; I don't know the
exact timing, but a typical sync interval for a UNIX system is about
20 seconds. On some systems this interval is tunable as well (I know
A/UX does not have this; I sent in an RFC for it a long time ago).

I put together a HyperCard stack concerning A/UX tuning some time ago
(for the A/UX sales support people in Australia). If there's interest,
I could revitalize some of the information and republish it.
UNIX system tuning is a black art, and with the advent of systems
such as SysV.4, with dynamically allocated resource tables, we should
maybe be rid of that nuisance.

regards,
Kent Sandvik



-- 
Kent Sandvik, Apple Computer Inc, Developer Technical Support
NET:ksand@apple.com, AppleLink: KSAND  DISCLAIMER: Private mumbo-jumbo
Zippy++ says: "Operator overloading is pretty useful for April 1st jokes"

liam@cs.qmw.ac.uk (William Roberts;) (01/21/91)

In <48252@apple.Apple.COM> ksand@Apple.COM (Kent Sandvik) writes:

>In article <18804.2796de90@windy.dsir.govt.nz> sramtrc@albert.dsir.govt.nz writes:
>This is true until you get to the point in the envelope where the 
>amount of buffers residing in memory makes it hard to find more space,
>so the system start paging, and ultimately swapping.
>And swapping should be avoided, because swapping of large binaries takes
>a long time.

Under BSD-influenced systems (including A/UX) binaries do NOT swap: the text 
segments are shareable and read-only, so the system just pulls them back in 
from the original binary. This is the reason for the "Text file busy" message; 
it would be embarrassing if you changed the file from which executable pages 
get pulled back. For example:

  % cp /bin/dd /tmp/dd
  % /tmp/dd if=/dev/zero of=/dev/null bs=10k &
  % date >/tmp/dd
  /tmp/dd: Text file busy
  %

Other things which no longer swap include the u area: this is permanently 
resident in memory under A/UX (so I'm told).

>I put together an HyperCard stack concerning A/UX tuning some time ago
>(for the A/UX sales support people in Australia). If there's interest
>I could revitalize some of the information and republish it. 

Yes please - this kind of stuff is always interesting.

>UNIX systems tuning is a black art, and with the advent of systems
>such as SysV.4 with dynamically allocated resource tables we should
>maybe get rid of that nuisance.

It would be better to get the SunOS 4.x virtual memory system, which gets rid 
of the statically allocated disk cache entirely. What effectively happens is 
that VM and disk cache contend for the whole of the available memory, and you 
really do get compiler intermediate files never touching the disk, but without 
a large proportion of your physical memory being permanently stolen for disk 
buffers. They make even more use of this with a trick called the "tmp 
filesystem", which resides solely in virtual memory and avoids even 
inode/directory updates going to disk: perfect for compiler intermediate files 
and other junk you don't want to keep.
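
(From memory, the incantation on SunOS 4.x is an fstab line something like

  swap /tmp tmp rw 0 0

i.e. /tmp mounted on swap space with filesystem type "tmp", though I don't
have a SunOS box to hand to double-check.)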

As far as I understand things, this isn't part of SysV.4, but rather one of 
the "transparent extras on top of the standard" which is what product 
differentiation is all about these days.
--

William Roberts                 ARPA: liam@cs.qmw.ac.uk
Queen Mary & Westfield College  UUCP: liam@qmw-cs.UUCP
Mile End Road                   AppleLink: UK0087
LONDON, E1 4NS, UK              Tel:  071-975 5250 (Fax: 081-980 6533)

alexis@panix.uucp (Alexis Rosen) (02/05/91)

Jim Jagielski wrote:
>Anyway, this all leads to an interesting question... certainly, as far as
>disk buffers are concerned, there is a point of diminishing returns where
>increasing the amount of buffers adds very little or even DECREASES performance
>(possibly). Does anyone have any good system tuning information for A/UX...
>25% memory for NBUF seems about right, but with large systems (32 megs) that
>still leaves a good chunk of free memory... Of course, that isn't bad since
>that means that swapping won't occur :)

Not a bad guess. When I did the MacUser A/UX review, I guessed that 10% was
a bad idea for Macs with 8MB+ RAM. We did a bunch of tests and sure enough,
25% was faster each time. Problem was, the speed differences were marginal:
a few percent. It wasn't worth it for the few times when the extra 1MB of
free RAM was going to be _really_ missed, slowing down the Mac a lot.

Somewhere between 15 and 20% might be a better default.

On the other hand, all these figures came from "average" tasks. Individual
habits can vary so wildly that you really have to figure out what _your_
needs are and set NBUF based on that. BTW, for the last bit of power on
all-FFS systems, set SBUFSIZE to 4096 and halve NBUF, as the figures below
illustrate.
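
The total cache stays about the same size; you just manage it in half as
many, twice-as-large pieces. Using Jim's figure of 1551 buffers (775 being
roughly half of 1551):

  % echo '1551 * 2048' | bc
  3176448
  % echo '775 * 4096' | bc
  3174400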

Does anyone know why certain options for pstat don't seem to do anything?
I think "-p" was one...

---
Alexis Rosen
Owner/Sysadmin, PANIX Public Access Unix, NY
{cmcl2,apple}!panix!alexis