[comp.unix.admin] Help with newfs under SunOS 4.1.1: can't allocate sectors

tar@math.ksu.edu (Tim Ramsey) (06/28/91)

I'm trying to partition a 1.2G (CDC Wren VII 94601-12G) SCSI drive that's
hanging off of a Sun4/330 running SunOS 4.1.1.  My new /usr partition will
be 600M (977 cylinders, 1200 blocks/cylinder).  When I invoke newfs I
get the following warning:

hilbert# newfs -v sd0g:
mkfs  /dev/rsd0g 1172400 80 15 8192 1024 16 10 59 2048 t 0 0 8 7
Warning: inode blocks/cyl group (140) >= data blocks (75) in last
    cylinder group. This implies 1200 sector(s) cannot be allocated.
/dev/rsd0g:     1171200 sectors in 976 cylinders of 15 tracks, 80 sectors
        599.7MB in 61 cyl groups (16 c/g, 9.83MB/g, 4544 i/g)
super-block backups (for fsck -b #) at:
 [ ... ]

It looks like I have more inodes than file space in the last cylinder
group.  Why?  Is my partition sized oddly?  If so, how do I calculate a
"good" size that's near 600M?  The default number of bytes/inode seems
plenty low to me.

Also, the partition has 977 cylinders.  Why does mkfs report that there
are 976 cylinders?

Any explanations, advice, or pointers to documentation would be
appreciated.

--
Tim Ramsey/system administrator/tar@math.ksu.edu/(913) 532-6750/2-7004 (FAX)
Department of Mathematics, Kansas State University, Manhattan KS  66506-2602
I don't want freedom from.  I want freedom to.

torek@elf.ee.lbl.gov (Chris Torek) (06/28/91)

In article <1991Jun28.020854.16006@maverick.ksu.ksu.edu> tar@math.ksu.edu
(Tim Ramsey) writes:
>... running SunOS 4.1.1 ...
>hilbert# newfs -v sd0g:
>mkfs  /dev/rsd0g 1172400 80 15 8192 1024 16 10 59 2048 t 0 0 8 7
>Warning: inode blocks/cyl group (140) >= data blocks (75) in last
>    cylinder group. This implies 1200 sector(s) cannot be allocated.
>/dev/rsd0g:     1171200 sectors in 976 cylinders of 15 tracks, 80 sectors
>        599.7MB in 61 cyl groups (16 c/g, 9.83MB/g, 4544 i/g)

>It looks like I have more inodes than file space in the last cylinder
>group.  Why?

The 4.[23...]BSD `fast file system' works by dividing a `disk' (or
more precisely, a partition) into a set of `cylinder groups'.  Each
partition is a three-dimensional object with two major properties:
a seek delay and a rotational delay.

The object is a set of concentric cylinders.  Each cylinder is a stack
of rings.  Each ring is a disk track; the height of each stack is the
number of surfaces.  A seek delay occurs whenever you have to move
`disk heads' from one cylinder to another; a rotational delay occurs
whenever you go around a ring.  Note that there is, at least in most
versions of newfs, no head switch delay when moving up or down one
stack of rings (even though some drives do have such a delay).

Rotational delays are handled by a fancy block-within-cylinder-group
allocator.  The seek delays are minimised more simply:  The set of
cylinders is divided into some number of `groups', and all the
cylinders within one group are considered equal.

The division into cylinder groups is done quite simply:  Every $n$
cylinders are made into a group.  $n$ defaults to 16.  You are using
the default (the 7th parameter in the mkfs line above).  Each cylinder
group contains some data blocks, followed by one contiguous fixed-sized
chunk holding general information and inodes, followed by more data
blocks.  (The first or last set of data blocks may be empty.  The
summary and inode blocks are `in the middle' so that they can be
spiralled down the pack.  They hold a copy of the superblock as well.
In this manner, the superblock is replicated on every surface and in
enough places that destroying all copies is extremely unlikely.) The
contiguous fixed-size chunk is what I will call the `cylinder group
data'.
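
In picture form (sizes not to scale; the split of data blocks before
and after the fixed chunk is what shifts from group to group), a
single group looks roughly like this:

    +-------------+---------------------------------------+-------------+
    | data blocks | superblock copy + cg summary + inodes | data blocks |
    +-------------+---------------------------------------+-------------+
                   \________ cylinder group data ________/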

Now, 1172400 sectors with 80 sectors per track and 15 heads comes to
977 cylinders exactly.  (Ignore the fact that ZBR disks do not have a
fixed number of sectors per track; the fast file system requires a
fixed number, so you pick something that makes the numbers work out.)
With 977 cylinders and 16 cylinders per group, there are 61 whole
groups and exactly one cylinder (80*15 sectors) left over.
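
Spelled out, using only the numbers from the mkfs line:

    80 sectors/track * 15 tracks/cylinder = 1200 sectors/cylinder (600K)
    1172400 sectors / 1200 sectors/cyl    =  977 cylinders exactly
    977 cylinders / 16 cylinders/group    =   61 whole groups, 1 cylinder over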

The fast file system can deal with a single `left over' partial
cylinder group, but only by, in effect, pretending that the data blocks
that do not exist in that group are already allocated, and/or moving
the cylinder group data to the front.  The cylinder group data *must*
be all there.  In this case, the cylinder group data occupies 1120K.
A single cylinder, 80*15 sectors, is only 600K---it is just not big
enough.  Two cylinders would be barely enough (you would get 1200-1120
or 80K of data space in such a cylinder group).
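
The two counts in the warning appear to be 8192-byte file system
blocks, which is where these figures come from:

    cylinder group data:   140 blocks * 8192 = 1,146,880 bytes = 1120K
    last (partial) group:   75 blocks * 8192 =   614,400 bytes =  600K  (one cylinder)
    two cylinders:         150 blocks * 8192 = 1,228,800 bytes = 1200K  (80K left for data)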

>Is my partition sized oddly?  If so, how do I calculate a "good" size
>that's near 600M?  The default number of bytes/inode seems plenty low
>to me.

You can change the number of cylinders per cylinder group, so that the
remainder is not 1, or you can change the size of the cylinder group data
by changing the number of inodes per cylinder group.  If you know the
`average' size of each file, you can select a less conservative figure
for bytes per inode.  I have found that 6 to 8 K is more typical: using
6144 bytes per inode we see things like file systems 90% full with 65%
of their inodes used.  News file systems are atypical; news files average much
smaller.  Your existing file systems are a good place to get estimates.
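
For instance, with a BSD-style df you can see space and inode usage
side by side on an existing file system (your current /usr, say) with
something like the following; check df(1) on your system for the exact
flag.  Dividing kbytes used by inodes used gives the average file
size, and hence a reasonable bytes-per-inode figure:

    df -i /usr
    # average bytes per file ~= (kbytes used * 1024) / (inodes used)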

Each inode reserves 128 bytes, so going from 4544 inodes per group
(2048 bytes per inode) to 1600 inodes per group (6144 bytes per inode)
will reduce the cylinder group data size by (4544-1600)*128 bytes or
368K.  This still requires 752K per cylinder group, which still exceeds
one cylinder; however, even without changing the cylinder group size
you will see an extra 368K for each of the 61 cylinder groups, or an
additional 22 megabytes, which is fairly respectable, and which
dwarfs the 600K lost from ignoring the last cylinder.
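
So if you settle on 6144 bytes per inode, the invocation would look
something like this (-i is newfs's bytes-per-inode option; see
newfs(8) before trying it on a live disk):

    newfs -v -i 6144 sd0g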

>Also, the partition has 977 cylinders.  Why does mkfs report that there
>are 976 cylinders?

Since the last cylinder was discarded, there are only 976 left.
Increasing the number of cylinders per cylinder group to 17 will
recover it (and will change the number of cg's to 57.5, i.e., 58.)  The
price of larger cylinder groups is increased seek times within a single
cylinder group, i.e., slower file access.  In return, it buys back
1120*(61-58) Kbytes (3.3MB) at 2048 bytes per inode, or 752*(61-58)
Kbytes (2.2MB) at 6144 bytes per inode, and of course regains the final
cylinder (.6MB).  The reduction in speed might be immeasurable, or
might be quite large; only detailed analysis, or trying it both ways,
will tell.
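
To see why the leftover cylinders stop being a problem at 17 cylinders
per group, and what the combined invocation might look like (-c is
newfs's cylinders-per-group option; again, see newfs(8)):

    977 cylinders = 57*17 + 8, so the 58th (partial) group gets 8 cylinders
    8 cylinders   = 8 * 600K = 4800K, ample room for the 752K of cg data

    newfs -v -i 6144 -c 17 sd0g
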
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek@ee.lbl.gov