[comp.sys.ibm.pc] Adaptec 1542 Kernel tuning with 386

neese@adaptex.UUCP (07/06/90)

>behm@zds-ux.UUCP (Brett Behm) writes:
>
>>I really do not know what to make of these results.
>
>I don't either.  In all cases the tests were executed on the same
>location on the disk - with NOTHING else running..

Okay, here are some explanations.  Running the DMA on the adapter much
faster than the SCSI bus can cause many more disconnects/reconnects,
since the drive's buffer can be emptied much faster than the drive can
fill it.  This only becomes an issue when running synchronous; async
transfers are always much slower than the slowest DMA rate the adapter
can provide.  The differences in transfer rates can be skewed all over
the place depending on the drive.  In some cases retries may be invoked,
and in others not, which causes some weird numbers to be displayed.
The other case is where the data is not in sequential order on the
disk.  The sparing a SCSI device does can wreak havoc on benchmarks.
If the sparing is not in-line, the disk may have to make a long seek to
get the data you thought was sequential.  If the sparing causes a
complete track replacement, the data rates will be even more erratic.
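To see why a faster bus means more disconnect/reconnect traffic, here is
a toy model you can compile and play with.  All the rates, sizes, and
overheads in it are made-up round numbers for illustration, not
measurements of any particular drive: the buffer fills from the media at
one rate and drains over the bus at another, and every time the bus
outruns the media the drive must disconnect and wait for a refill.

/* Toy model, not Adaptec code: a drive buffer filled from the media
 * and drained over the SCSI bus.  When the bus outruns the media the
 * buffer runs dry mid-transfer, forcing a disconnect/reconnect cycle.
 * All numbers below are illustrative assumptions. */
#include <stdio.h>

int main()
{
    double media_rate = 1.2e6;  /* bytes/sec off the platter (assumed) */
    double buffer     = 64e3;   /* drive buffer size (assumed)         */
    double xfer       = 1e6;    /* total bytes to transfer             */
    double overhead   = 2e-3;   /* sec per disconnect/reconnect (assumed) */
    double rates[]    = { 1.0e6, 1.5e6, 2.0e6, 4.0e6 }; /* bus rates   */
    int i;

    for (i = 0; i < 4; i++) {
        double bus = rates[i];
        double per_burst, secs;
        int bursts;

        if (bus <= media_rate) {
            /* the media keeps up: one connection, no disconnects */
            printf("bus %.1f MB/s: 0 disconnects, %.3f sec\n",
                   bus / 1e6, xfer / bus);
            continue;
        }
        /* bytes moved per connection: a full buffer plus whatever the
         * media supplies while the bus is draining it */
        per_burst = buffer / (1.0 - media_rate / bus);
        bursts    = (int)(xfer / per_burst) + 1;
        secs      = xfer / media_rate + (bursts - 1) * overhead;
        printf("bus %.1f MB/s: %d disconnects, %.3f sec\n",
               bus / 1e6, bursts - 1, secs);
    }
    return 0;
}

Once the media is the bottleneck, the faster bus finishes no sooner; it
just buys you more disconnect/reconnect overhead, which is exactly the
erratic behavior described above.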
You are also at the whim of the driver/OS.  If the driver does not take
advantage of host adapter command queuing for the devices on the bus,
then the overhead of adapter command setup can vary dramatically.
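For concreteness, "command setup" on the 1542 means building a CCB and
posting its address in an outgoing mailbox, roughly as sketched below.
The structure layouts, opcode, and port offset are from my memory of the
AHA-154x programming interface and should be treated as assumptions;
check them against the adapter's technical reference manual.

/* Hedged sketch of 1542 command setup; layouts/opcodes from memory of
 * the AHA-154x interface -- verify against the technical reference. */

struct ccb {                  /* Command Control Block */
    unsigned char op;         /* 0x00 = initiator SCSI command */
    unsigned char idlun;      /* target ID in bits 7-5, LUN in bits 2-0 */
    unsigned char cdblen;     /* length of the CDB below */
    unsigned char senselen;
    unsigned char datalen[3]; /* 24-bit length, MSB first */
    unsigned char dataptr[3]; /* 24-bit physical address, MSB first */
    unsigned char linkptr[3];
    unsigned char linkid, hastat, tarstat, reserved[2];
    unsigned char cdb[12];    /* the SCSI command itself */
};

struct mailbox {              /* one outgoing mailbox entry */
    unsigned char status;     /* 1 = CCB waiting to be started */
    unsigned char ccbptr[3];  /* 24-bit physical address of the CCB */
};

#define CMD_START_SCSI 0x02   /* adapter command: scan the out mailboxes */

extern void outb(unsigned port, unsigned char val);  /* per your OS */

/* Post one CCB: fill a free mailbox, then kick the adapter. */
void issue_ccb(struct mailbox *mbo, unsigned long ccb_phys, unsigned base)
{
    mbo->ccbptr[0] = (unsigned char)(ccb_phys >> 16);
    mbo->ccbptr[1] = (unsigned char)(ccb_phys >> 8);
    mbo->ccbptr[2] = (unsigned char)(ccb_phys);
    mbo->status    = 1;
    outb(base + 1, CMD_START_SCSI);  /* data-out port = base+1 (assumed) */
}

All of that work is repeated for every command; how much of it overlaps
with useful time on the bus is entirely up to the driver.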
Tuning the adapter may not make any difference if the adapter is not
being pushed to its limit.  While the drive/adapter/CPU combination
virtually dictates the data transfer rates possible, the way the driver
works will alter the results dramatically.  If the adapter is not kept
busy, then performance won't change much no matter how much tuning you
do.  For instance, very few drivers use the command queuing feature of
the adapter.  Most folks don't understand the advantage of keeping at
least 2 commands per device going on the adapter.  The overall advantage
is that when one command is completed and the driver is in the interrupt
portion of the code, another SCSI command is already being placed on the
bus by the adapter, virtually cutting the command overhead in half.  The
normal way UNIX drivers are written is to issue a command and, at
interrupt time, call the start routine again to get another command
going.  Drivers written this way will not really show the performance
the adapter/SCSI bus is capable of, and they make system overhead much
higher.  If the driver instead issues up to 2 commands per device at a
time, the adapter is kept busy enough for tuning of the bus on/off and
DMA rates to make a difference.
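Here is a rough sketch of that 2-command discipline.  The helper names
(adapter_issue, adapter_completed, biodone) are made up for illustration
-- adapter_issue stands for the CCB/mailbox setup sketched earlier -- so
don't read them as any particular driver's entry points.

#define MAX_OUTSTANDING 2       /* keep 2 commands per device in flight */

struct request {
    struct request *next;       /* ... plus CDB, buffer, count, etc. */
};

struct device {
    int outstanding;            /* commands currently on the adapter */
    struct request *queue;      /* requests not yet issued */
};

extern void adapter_issue(struct request *rq);         /* CCB + mailbox */
extern struct request *adapter_completed(struct device *dev);
extern void biodone(struct request *rq);               /* finish the I/O */

/* Issue until the device has MAX_OUTSTANDING commands on the adapter. */
static void start_io(struct device *dev)
{
    while (dev->outstanding < MAX_OUTSTANDING && dev->queue != NULL) {
        struct request *rq = dev->queue;
        dev->queue = rq->next;
        adapter_issue(rq);
        dev->outstanding++;
    }
}

/* Completion interrupt.  Because a second command was already queued,
 * the adapter can be selecting the target for it while we are still in
 * here -- that overlap is what cuts the command overhead in half. */
void scsi_intr(struct device *dev)
{
    struct request *done = adapter_completed(dev);

    dev->outstanding--;
    start_io(dev);              /* refill the slot we just emptied */
    biodone(done);
}

In the one-command-at-a-time pattern, the bus sits idle from the moment
a command completes until the interrupt code gets around to issuing the
next one; with two outstanding, that gap is hidden behind the adapter's
work on the already-queued command.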
Without keeping the adapter busy, tuning these variables will not make
that much difference.  I do not know how ISC implemented their driver,
but I would be willing to bet that it does not do multiple queuing.  If
it does, then tuning these variables will make much more difference, but
only if you are running synchronous.  Since synchronous is a simple
jumper change on the adapter, I can't see why anyone would not run in
that mode.
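For reference, the tunables in question are set with adapter commands
over the 1542's command port.  A hedged sketch follows; the opcodes (Set
Bus-On Time, Set Bus-Off Time, Set Transfer Speed), the port layout and
status bit, and the DMA-speed encoding are from my memory of the
AHA-154x command set, and the values passed are examples only, so verify
everything against the technical reference manual before relying on it.

/* Hedged sketch: programming the 1542's bus-on/bus-off times and DMA
 * speed.  Opcodes, port layout, and encodings are from memory of the
 * AHA-154x command set -- verify against the technical reference. */

#define CMD_BUSON_TIME  0x07        /* set bus-on time (microseconds)  */
#define CMD_BUSOFF_TIME 0x08        /* set bus-off time (microseconds) */
#define CMD_DMASPEED    0x09        /* set DMA transfer speed          */

#define STATUS(base)    (base)      /* read: adapter status            */
#define DATA(base)      ((base)+1)  /* write: command/data out         */
#define CDF             0x08        /* status: command port full (assumed) */

extern unsigned char inb(unsigned port);             /* per your OS */
extern void outb(unsigned port, unsigned char val);

/* Write one command byte, waiting for the adapter to take the last. */
static void adapter_cmd(unsigned base, unsigned char byte)
{
    while (inb(STATUS(base)) & CDF)
        ;                           /* spin until the port drains */
    outb(DATA(base), byte);
}

void tune_1542(unsigned base)
{
    adapter_cmd(base, CMD_BUSON_TIME);
    adapter_cmd(base, 7);           /* example: 7us on the bus...      */
    adapter_cmd(base, CMD_BUSOFF_TIME);
    adapter_cmd(base, 4);           /* ...4us off                      */
    adapter_cmd(base, CMD_DMASPEED);
    adapter_cmd(base, 0);           /* encoding per the manual (assumed) */
}

And again: per the discussion above, none of this is worth much unless
the driver keeps the adapter busy in the first place.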


			Roy Neese
			Adaptec Senior SCSI Applications Engineer
			UUCP @  uunet!swbatl!texbell!
				  {nominil,merch,cpe,mlite}!adaptex!neese