[comp.sys.sun] Problems with Fujitsu-M2382K on Xylogic 735 on a 3/280

tran@sun.com (Tony Tran) (11/29/88)

I had the same problem with the Hitachi DK815: I wanted to increase the
number of inodes so that I could accommodate a netnews partition.  No luck
at all.

Well, I guess SunOS is not like System V, where you can increase the
inodes in a given partition with ease.

I would like to find out how other people work around this problem of
increasing inodes on Sun disks.

   Tony Tran
  Versatec, Inc
  San Jose, Calif.
 {pyramid|sun|ames}!versatc!tran

aat@mace.cc.purdue.edu (Jeff Smith) (12/02/88)

> We recently received a Fujitsu-M2382K on Xylogic 735 and I was trying to
> set it flying on a 3/280 when I ran into problems with insufficient inodes
> in the client root partition...
> 
> (1) how come the "-i" option in "newfs" didn't do what the manual says
> it should do (I know, I know, manuals lie all the time :-< )?

Actually, it does.  If you use "newfs -vN" you can see what newfs would
pass to mkfs without actually running mkfs (make sure you use 'N' and not
'n'!).  Newfs passes the -i parameter to mkfs, which then silently adjusts
the number of inodes to what it thinks is right.

The real problem is the constant MAXIPG in ufs/fs.h.  This is set to
2048 and puts a ceiling on the number of inodes per cylinder group.
Mkfs will actually use 2048 if you tell it to, but that's the maximum.
Big drives have more sectors per cylinder than older, smaller ones:
about 900 sectors per cylinder on a Fujitsu Eagle, versus 2241 on a
Fujitsu Swallow IV.  Since the maximum number of inodes per cylinder
group is constant while the number of data blocks varies, the ratio of
inode blocks to data blocks is smaller on the Swallow IV than on an
Eagle.  Using 8 cylinders per cylinder group doubles that ratio by
halving the number of data blocks per cylinder group.  You have to use
a 4k/1k file system to get 8 cylinders/cylinder group.

As you noted, this has the undesirable side effect of crashing the host
when you attempt any reference to that file system.  The stack backtrace
looks like this:

    _panic(0xf076d88) + 44
    _segmap_unlock() + 7a
    _segmap_fault(0xf0de000,0xfb3c000,0x2000,0x3,0x0) + 8e
    _as_fault(0xf07e6a8,0xfb3c000,0x400,0x3,0x0) + a0
    _fbrelse(0xf0e5d44,0x0)	+ 1e
    _dirlook(0xf09e5b8,0xffff95bc,0xffff9570) + 2fc
    _ufs_lookup(0xf09e5c0,0xffff95bc,0xffff96bc,0xf0e5140,0xffff96e0,0x0) +1a
    _au_lookuppn(0xffff96e0,0x1,0x0,0xffff976c,0x0)	+ 20e
    _au_lookupname() + 34
    _lookupname(0xefffc4e,0x0,0x1,0x0,0xffff976c) +	1a
    _stat1(0xffff9a18,0x1) + 1a
    _stat(0xffff9a18) + c
    _syscall(0x26) + 15a
    syscont() + 6
    data address not found

> (2) is there something peculiar with the geometry of the M2382K that
> restricts the choice of # of cyl/grp as mentioned above?

No, the cyl/grp choice is wired into mkfs because it's wired into the file
system.

> (3) does anyone have a possible work around for this problem?

Wish we did.  We've reported the bug to Sun.

Jeff Smith, aat@cc.purdue.edu
Purdue University, 210 Math Science, W. Lafayette, IN 47907, 317/494-1787

childers@avsd.uucp (Richard Childers) (12/14/88)

versatc!tran@sun.com (Tony Tran) writes:
>I would like to find out how other people work around this problem of
>increasing inodes on SUN disks for the SUNs

Well, you have to know what a disk is, and what some of the architectural
attributes of long-term storage devices imply, to use it well -- but 'mkfs'
has an argument to tweak the ratio of inodes to disk blocks ...

-- richard

..{amdahl|decwrl|octopus|pyramid|ucbvax}!avsd.UUCP!childers@tycho
AMPEX Corporation - Audio-Visual Systems Division, R & D

aat@mace.cc.purdue.edu (Jeff Smith) (12/15/88)

> We recently received a Fujitsu-M2382K on Xylogic 735 and I was trying to
> set it flying on a 3/280 when I ran into problems with insufficient inodes
> in the client root partition...

We received a reply from hotline@sun.com that helped us.  We got enough
inodes at the cost of losing about 1% of our data blocks.  I've appended
the suggested workaround.

Jeff Smith

 Work around:
	See the evaluation below.  If the user can afford to lose a few
	tracks/sectors, he should remake the filesystem with the third
	scheme discussed in the evaluation by reducing the number of
	cylinders/group.  But he must make sure that #sectors * #tracks
	is an even number.
	#/etc/mkfs /dev/rxy0e 94086 67 26 8192 1024 8 10 60 2048 t 0
	/dev/rxy0e:     94068 sectors in 54 cylinders of 26 tracks, 67 sectors
        	48.2Mb in 7 cyl groups (8 c/g, 7.14Mb/g, 2048 i/g)
	super-block backups (for fsck -b#) at:
	 32, 14048, 28064, 42080, 56096, 70112, 84128,
	#
	Notice that we have lost 1.8MB of data: #sectors=67, #tracks=26,
	and the total number of sectors is 94068.
 Evaluation:
	The number of cylinders per group depends indirectly upon
	fs_cpc (cylinders per cycle in the rotational position table),
	and cpg has to be a multiple of cpc.  cpc = 16 >> x, where x is
	the largest integer such that "tracks * sectors = N * (2**x)"
	for an integer N.  In this case "tracks * sectors" was an odd
	number, hence x=0 and cpc=16.  UFS depends upon cpc to find the
	rotational position of a sector.  This would be a difficult bug
	to fix.  See the workaround above; with it one can reduce the
	number of cylinders per group.