[comp.unix.wizards] MSCP for you and me

nessus@athena.mit.edu (Doug Alan) (10/26/88)

Thanks to Chris Torek and Michael Bushnell for helping with some
problems I have had with the 4.3BSD MSCP disk driver.

>> If you accidentally attempt to boot a system on a
>> disk drive for which there is no partition table in the kernel, BSD
>> kindly trashes the filesystem for you.

> [Chris Torek:] The 4.3BSD UDA50 driver treats unknown disks as type
> `ra81'.  I considered this bogus [...]

This could definitely explain why my root filesystem got trashed if I
had a bigger-than-normal root filesystem.  But I always make my root
filesystems be no bigger than 15884 sectors to prevent exactly this
sort of disaster....

>> I also have a more academic question: A BSD filesystem is supposed
>> to begin on a cylinder boundary for performance reasons.  Is swap
>> space also supposed to begin on a cylinder boundary, or does it make no
>> difference?  I know there's "tunefs", and there's tuna fish, but
>> there's no "tuneswap"....

> It hardly matters, but if everything else starts and ends on a
> cylinder boundary, the free region(s) left over for swap space will
> do so as well.

Ah, but since I keep my root partitions down at 15884 sectors, the
root partition doesn't end on a cylinder boundary.  Right now I start
my swap space (on an XT8760) at 16380, instead of 15884, so that it
will begin on a cylinder boundary.  I haven't lost a whole lot of
sleep over it, but I have wondered on occasion whether there really
is any advantage at all in wasting those 496 sectors....
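
(The arithmetic, for anyone who cares, is just rounding the offset up
to the next multiple of the sectors-per-cylinder figure.  Here's a
little C sketch; the geometry below is a guess of mine that happens to
reproduce my numbers, so substitute the real sectors-per-track and
tracks-per-cylinder from your disktab entry.)

#include <stdio.h>

/*
 * Round a partition offset up to the next cylinder boundary.
 * The geometry is a guess, not gospel; use your own disktab values.
 */
int
main(void)
{
	int nsect = 52;			/* sectors per track (guess) */
	int ntrak = 15;			/* tracks per cylinder (guess) */
	int spc = nsect * ntrak;	/* sectors per cylinder */
	int rootsize = 15884;		/* size of the root partition */
	int swapstart;

	swapstart = ((rootsize + spc - 1) / spc) * spc;
	printf("swap starts at sector %d, wasting %d sectors\n",
	    swapstart, swapstart - rootsize);
	return 0;
}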

I have another question for BSD disk driver wizards: I discovered
sometime recently that if I use "tunefs" to change "maxcontig" for a
filesystem from 1 to 2, the read performance of the filesystem (for a
single process) increases about 25%.  Increasing the setting to 3 did
not result in any increase in performance, so I left it at 2.  (We use
8K blocks.)
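
(A single-process sequential-read test of the sort that shows this is
easy to hack up; here's a sketch.  Point it at a file a good deal
larger than physical memory so the buffer cache doesn't hide the disk.)

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define BSIZE	8192		/* matches our 8K filesystem blocks */

char	buf[BSIZE];

int
main(int argc, char **argv)
{
	int fd, n;
	long total = 0;
	double secs;
	struct timeval t0, t1;

	if (argc != 2) {
		fprintf(stderr, "usage: readtest file\n");
		exit(1);
	}
	if ((fd = open(argv[1], O_RDONLY)) < 0) {
		perror(argv[1]);
		exit(1);
	}
	gettimeofday(&t0, NULL);
	while ((n = read(fd, buf, BSIZE)) > 0)	/* sequential 8K reads */
		total += n;
	gettimeofday(&t1, NULL);
	secs = (t1.tv_sec - t0.tv_sec) +
	    (t1.tv_usec - t0.tv_usec) / 1000000.0;
	printf("%ld bytes in %.2f seconds = %.0f Kbytes/sec\n",
	    total, secs, total / secs / 1024.0);
	return 0;
}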

The man page for "tunefs" says you should only increase "maxcontig" on
a device driver that can chain several buffers together in a single
transfer.  Can the MSCP device driver do this?  Is there any reason
why I shouldn't leave "maxcontig" set to 2?  My rough benchmarks were
only for a single process.  Does anyone think this might degrade
system performance?  Another issue is that our disk controllers are
smart, and you can set them to prefetch blocks (the controller also
does caching -- prefetched blocks go into the cache).  Setting the
prefetch to be 32 sectors (2 blocks) also resulted in another little
increase in filesystem read performance for a single process.

|>oug /\lan

   (or nessus@athena.mit.edu
       nessus@mit-eddie.uucp)

chris@mimsy.UUCP (Chris Torek) (10/26/88)

>>[Chris Torek:] The 4.3BSD UDA50 driver treats unknown disks as type
>>`ra81'.  I considered this bogus [...]

Actually, I think this was wrong.  I looked again at what was supposed
to be the 4.3BSD release (according to our RCS files, not the SCCS
files at Berkeley), and it should have complained about unknown drive
types.  (It did a switch on the number part of the encoded media ID.)
But again, all those third-party disks claim to be RA81s anyway.
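
(For anyone who has not stared at one: as I remember the MSCP spec,
the media type identifier packs the model number into the low seven
bits, and five 5-bit character codes (0 for space, 1 through 26 for
`A' through `Z') into the bits above, high character first; an RA81
should decode to something like `DURA' 81.  Here is a sketch of a
decoder, plus the sort of number switch the driver does.  Take the
layout with a grain of salt and check the spec.)

#include <stdio.h>
#include <stdlib.h>

/*
 * Decode an MSCP media type identifier given in hex on the command
 * line.  Layout assumed here, from memory: bits 0-6 are the model
 * number, bits 7-31 are five 5-bit character codes, high char first.
 */
int
main(int argc, char **argv)
{
	unsigned long id;
	char name[6];
	int i, c, num;

	if (argc != 2) {
		fprintf(stderr, "usage: mediaid hexvalue\n");
		exit(1);
	}
	id = strtoul(argv[1], (char **)NULL, 16);

	for (i = 0; i < 5; i++) {
		c = (id >> (7 + 5 * (4 - i))) & 0x1f;
		name[i] = (c == 0) ? ' ' : 'A' + c - 1;
	}
	name[5] = '\0';
	num = id & 0x7f;

	printf("media id decodes to \"%s\" %d\n", name, num);

	/* the sort of switch the driver does on the number part */
	switch (num) {
	case 81:
		printf("treating drive as an RA81\n");
		break;
	default:
		printf("unknown drive type %d\n", num);
		break;
	}
	return 0;
}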

In article <10337@eddie.MIT.EDU> nessus@athena.mit.edu (Doug Alan) writes:
>I have another question for BSD disk driver wizards: I discovered
>sometime recently that if I use "tunefs" to change "maxcontig" for a
>filesystem from 1 to 2, the read performance of the filesystem (for a
>single process) increases about 25%. ...

(Obviously you are not using a UDA50---I never got any repeatable change
in any performance measurements I ran for any tunefs parameters.)

>The man page for "tunefs" says you should only increase "maxcontig" on
>a device driver that can chain several buffers together in a single
>transfer.  Can the MSCP device driver do this?

The driver does not do it itself, but the controller could quite easily.

>Is there any reason why I shouldn't leave "maxcontig" set to 2?

No!  You might tell everyone what controller and drive this is, though.
The only way to find the proper tunefs values is to experiment; it is
easier if someone else has already done the experimenting. . . .

>...  Another issue is that our disk controllers are
>smart, and you can set them to prefetch blocks (the controller also
>does caching -- prefetched blocks go into the cache).  Setting the
>prefetch to be 32 sectors (2 blocks) also resulted in another little
>increase in filesystem read performance for a single process.

Probably because in those cases where the read-ahead block was
contiguous but was not delivered to the controller soon enough, the
second read command had to wait for most of a revolution.  When it was
delivered soon enough, the controller did the chaining whether or not
you had prefetching set; now it gets the block cached whether the
read-ahead is requested immediately or after a slight delay.
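
(To put a rough number on `most of a revolution': assuming a 3600 rpm
spindle, which is a guess on my part rather than a measured figure for
these drives, one revolution is about 16.7 ms, so the worst case of
one 8K block per revolution caps you somewhere near 480 Kbytes/sec.
The arithmetic:)

#include <stdio.h>

/*
 * Back-of-the-envelope cost of missing the read-ahead.  The
 * 3600 rpm spindle speed is an assumption; plug in the real figure.
 */
int
main(void)
{
	double rpm = 3600.0;			/* assumed spindle speed */
	double rev_ms = 60.0 * 1000.0 / rpm;	/* one revolution in ms */

	printf("one revolution = %.1f ms\n", rev_ms);
	printf("worst case (one 8K block per revolution) = %.0f Kbytes/sec\n",
	    8.0 * rpm / 60.0);
	return 0;
}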

You might find that raising the read-ahead and the maxcontig factors
together helps further.  But you (or someone else) will have to try
it to be sure.

(It would sure be nice if controllers came with *real* documentation
as to characteristics like `code delays to search on-board cache',
`will (not) read full tracks without waiting for index', etc.  Then
you *might* be able to predict some of this, after instrumenting the
drivers carefully.)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@mimsy.umd.edu	Path:	uunet!mimsy!chris