[comp.sys.hp] minfree, inodes

paul@eye.com (Paul B. Booth) (02/09/91)

Hi netters-

I'd really be interested to know how y'all feel about settings for minfree
and bytes-per-inode when making new file systems under hp-ux.  For a long
time now, I've been setting up filesystems with "newfs -i 8192 -m 5" to get
a bit of extra space out of the disks (on root systems, I don't use -m 5).
I figure that in most file systems, you always have way too many inodes, and
that on larger disks (>300Mb), there's no need to reserve 30Mb for system use.
Am I crazy?  Am I fragmenting my disks too much with -m 5?  Can I use fewer
bytes-per-inode?  Any other tuning ideas out there?
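To put numbers on the inode question, here's a back-of-the-envelope sketch
of what the inode table costs at the default vs. the -i 8192 setting.  The
128-bytes-of-disk-per-inode figure is an assumption for illustration, not
an HP-UX spec, and the 300Mb filesystem size is made up:

```shell
# Rough size of the inode table at two bytes-per-inode settings,
# assuming (hypothetically) 128 bytes of disk consumed per inode.
fs_bytes=$((300 * 1024 * 1024))   # a 300Mb filesystem (illustrative)
inode_size=128                    # ASSUMED on-disk bytes per inode

for bpi in 1024 8192; do
    inodes=$((fs_bytes / bpi))            # inodes allocated at newfs time
    table_kb=$((inodes * inode_size / 1024))
    echo "bytes/inode=$bpi -> $inodes inodes, ~${table_kb}Kb of inode table"
done
```

Under those assumptions, going from 1024 to 8192 bytes-per-inode cuts the
inode count by a factor of eight, and the reclaimed table space scales with
the disk.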

I'd really be interested in people's opinions on this, and will gladly post
a summary of email responses.

Thanks

--
Paul B. Booth  (paul@eye.com) (...!hplabs!hpfcla!eye!paul)
-------------------------------------------------------------------------------
3D/EYE, Inc., 2359 N. Triphammer Rd., Ithaca, NY  14850    voice: (607)257-1381
                                                             fax: (607)257-7335

rocky@hpfcmgw.HP.COM (Rocky Craig) (02/12/91)

> I've been setting up filesystem with "newfs -i 8192 -m 5"....Am I crazy?

Well, check rec.shrink for a second opinion:-)  My group stages tradeshows 
for HP, so we create lots of new filesystems on a regular basis.
We have "evolved" into using the values you quoted for a variety of
technical and empirical reasons.

1. Using the default values for inodes (1024 bytes/inode) does waste a lot
   of space.  Execute "bdf -i" on a "mature" file system using that bytes/inode
   value and compare inode % free vs. file system % free.  8192 has been 
   a safe choice.  16384 starts getting into the realm of discomfort, and
   doesn't buy you that much more over 8192.
2. We use "-m 5".  Why 5?  Because it works and hasn't bitten us yet :-)
   Back in the days of 50M disks, 10% was not very much space.  Today's
   500M (and more) disks can waste a lot more space.  The minfree is supposed
   to be for root recovery space AND performance, but when we use
   "-m 5" we see no NOTICEABLE performance degradation.  And in a trade show,
   you know somebody is watching your performance CLOSELY.  We'd rather have 
   the space for one or two extra demo sets.

   If you're only concerned about having enough space for root to clean up
   things (i.e., leave 5 meg regardless), scale your fudge space against 
   your disk capacity.
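The "scale your fudge space" idea above can be sketched as arithmetic: pick
the -m percentage that keeps at least a fixed 5Mb reserve (Rocky's "leave 5
meg regardless" figure), whatever the disk size.  The disk sizes here are
made up and nothing is run against a real filesystem:

```shell
# What "-m" percentage leaves at least ~5Mb of reserve on various disks?
reserve_kb=$((5 * 1024))          # target reserve: 5Mb
for disk_mb in 50 300 500; do
    disk_kb=$((disk_mb * 1024))
    # round up so the reserve is never below the 5Mb target
    pct=$(( (reserve_kb * 100 + disk_kb - 1) / disk_kb ))
    echo "${disk_mb}Mb disk: -m $pct keeps >= 5Mb free for root"
done
```

On a 50Mb disk that works out to the old default of 10%; on a 500Mb disk,
-m 1 already covers the 5Mb reserve, which is why a flat 10% feels wasteful
on big spindles.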

BTW, this discussion made the rounds internally at HP a few months back.
It looked remarkably like your standard "vi" vs. "emacs" wars :-)  The
bottom line: if it works, use it.

Rocky Craig			
rocky%hpfcmr@hplabs.hp.com

This article does not represent the official position of the Hewlett-Packard 
Company.  The above data is provided for informational purposes only.  It is 
supplied without warranty of any kind.  

hurf@batcomputer.tn.cornell.edu (Hurf Sheldon) (02/12/91)

In article <17780008@hpfcmgw.HP.COM> rocky@hpfcmgw.HP.COM (Rocky Craig) writes:
>> I've been setting up filesystem with "newfs -i 8192 -m 5"....Am I crazy?
>
>Well, check rec.shrink for a second opinion:-)  

You have to be crazy to worry about this stuff or it would make you nuts.

>My group stages tradeshows 
>for HP, so we create lots of new filesystems on a regular basis.
>We have "evolved" into using the values you quoted for a variety of
>technical and empirical reasons.
.
.
.
All said above is correct except in a heavily NFS-dependent
environment. It appears that when the NFS server goes
below 20% free space and there are hierarchical read/writes (someone
compiling in a server directory on a client), performance drops like a stone.
Read-only server situations don't exhibit this behavior as
badly, but a client-generated find on a 90% full NFS disk will go much
slower than on an 80% full disk. (I don't know why - perhaps fragmentation
causes more disk accesses to be queued.)

The large number of default inodes is long-term
insurance, as well as historically based, from the days when
a 50mb disc seemed like infinite space (remember?): as the disc
became full it fragmented a lot, every frag needed an inode, and
besides, who could conceive of having files bigger than 100k?
Many database applications are inode pigs as well, having beaucoup
3k entries. Anyway, a heavily written-to disc seems to need >5%
free space to keep performance up, but when you get up to 600+mb
discs, 60mb seems a bit excessive.

From a system management point of view, because you can change minfree
at any time, using 10% initially leaves you with some emergency
reserve for when things become critical. (Always at 5pm Friday.)
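Since minfree is adjustable after the fact, the emergency move is a one-line
tunefs.  A sketch that only prints the invocation you'd run when the 10%
reserve gets critical - the device name is hypothetical, and nothing here
touches a real filesystem:

```shell
# Print (don't run) the tunefs command that drops minfree to 5%.
device=/dev/dsk/c0d0s10        # HYPOTHETICAL device name
new_minfree=5
cmd="tunefs -m $new_minfree $device"
echo "would run: $cmd"
```

Running it for real of course needs root, and on HP-UX you'd check the
tunefs(1M) man page for whether a remount is needed to pick up the change.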

Inodes are an irreversible (more or less) judgement call - something
I have ignored 'til now, but after looking at a 580mb partition 70% full
with 603 inodes used and 157094 (150k!) free, it won't get
ignored again.
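For the curious, here's what those 157094 unused inodes cost, again assuming
(hypothetically) 128 bytes of disk per inode - an illustration, not an HP-UX
figure:

```shell
# Back-of-the-envelope on the partition above: 157094 unused inodes.
free_inodes=157094
inode_size=128        # ASSUMED on-disk bytes per inode
wasted_kb=$((free_inodes * inode_size / 1024))
echo "~${wasted_kb}Kb (~$((wasted_kb / 1024))Mb) tied up in unused inodes"
```

Roughly 19Mb of inode table that will never hold a file - which is the whole
case for raising bytes-per-inode at newfs time.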
Thanks Paul!

Hurf




-- 
     Hurf Sheldon			 Network: hurf@theory.tn.cornell.edu
     Program of Computer Graphics	 Phone:   607 255 6713

     580 Eng. Theory Center, Cornell University, Ithaca, N.Y. 14853  

munir@hpfcmgw.HP.COM (Munir Mallal) (02/13/91)

   >500M (and more) disks can waste a lot more space.  The minfree is supposed
   >to be for root recovery space AND performance, but when we use
   >"-m 5" we see no NOTICEABLE performance degradation.  And in a trade show,
   >you know somebody is watching your performance CLOSELY.  We'd rather have 
   >the space for one or two extra demo sets.

I would add that since trade shows are usually short-lived, your file system
is unlikely to become fragmented. The performance part of the 10% is intended
to allow the disk allocation routines a good chance of getting contiguous 
blocks for your files.  You should not see any degradation until your file
system is quite full and has been in use for some time.

Munir Mallal

Disclaimer:  The above is my opinion and does not represent endorsement 
             by Hewlett-Packard or anyone else.