[comp.sys.ibm.pc] Question about hard disk interleave factors

lane@dalcs.UUCP (John Wright/Dr. Pat Lane) (04/12/88)

I'm curious.  The default for interleave factor on most hard disk controllers
seems to be 3.  However, I have seen in several places that IBM and Compaq
recommend using 6 with their systems.  I have a Compaq Deskpro (8088) with a
Seagate ST225 (20 Meg) and a Western Digital W1002-WX2 (I may have the no.
slightly wrong) disk controller.  The controller formats using 3 by default
and in fact, when I got the system the disk was apparently formatted at 3.
But the performance was lousy and so I tested every possible factor (1 to 16)
using a program which estimates the number of disk revs req. to read a track.
The optimum according to this test was 5.  I'm told that optimum interleave is
a complex combination of disk, controller, and CPU performance factors.  This
leaves me a bit dissatisfied and I still wonder why IBM and Compaq systems 
should be different from all the rest in this respect.  Can anyone enlighten
me?  Thanks.

-- 
John Wright      //////////////////      Phone:  902-424-3805  or  902-424-6527
Post: c/o Dr Pat Lane, Biology Dept, Dalhousie U, Halifax N.S., CANADA  B3H-4H8 
Cdn/Bitnet: lane@cs.dal.cdn    Arpa: lane%dalcs.uucp@uunet.uu.net
Uucp: lane@dalcs.uucp or {uunet,watmath,utai,garfield}!dalcs!lane  

bcs212@vader.UUCP (Vince Skahan) (04/15/88)

I have a Tandy 20MB hard card that came with a recommended
interleave factor of 3.  The guy at the Radio Shack Computer
Center said they'd done extensive local testing and recommended
using 2.

I tested it myself and found "4" was the answer (giving a 450%
increase in speed over both of the Tandy-recommended settings).
Unfortunately, I believed them for 6 months and had to live with 
the slow speed until I had time to test different settings myself.
-- 
     Vince Skahan            Boeing Computer Services - Phila.
    (215) 591-4116           INTERNET:  bcs212%psev@boeing.com
UUCP: {...rutgers!bpa | uw-beaver!uw-june!bcsaic}!vader!bcs212 

gerard@tscs.UUCP (Stephen M. Gerard) (04/20/88)

In article <2832@dalcs.UUCP> lane@dalcs.UUCP (John Wright/Dr. Pat Lane) writes:
>I'm curious.  
.
. Portions omitted
.
>The optimum according to this test was 5.  I'm told that optimum interleave is
>a complex combination of disk, controller, and CPU performance factors.  This
>leaves me a bit dissatisfied and I still wonder why IBM and Compaq systems 
>should be different from all the rest in this respect.  Can anyone enlighten
>me?  Thanks.

Computing the optimum interleave factor for MS-DOS machines is indeed a
"Bag of Worms"!

I have tried to describe the key problems that come into play when
attempting to obtain optimal performance from your disk drive.  It is hard 
to do this without getting into Operating System or hardware design.  

There are basically four factors that affect the performance of the disk
subsystem.  They may be classified as:

1.) Ability of the disk controller.				(hardware)

2.) Ability of system board to transfer data.			(hardware)

3.) Intelligence of the Operating System buffering scheme.	(software)

4.) (lacking #3) Intelligence of the applications program.	(software)

At the hardware level, the disk controller must be able to read/write the
data at a high enough rate that it is ready for the command to handle the
next logical sector in the interleave sequence being used.  If it cannot,
the controller is forced to wait until the next disk revolution places the
desired sector under the disk drive's read/write head.  Even if the
controller can handle the data quickly enough, it may still be held up by
the inability of the system board's data bus to transfer the data before
the next logical sector passes under the disk head.  At the hardware level
this is all pretty much straightforward: either the disk controller and the
system board can support the selected interleave factor or they cannot.
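
To make the interleave idea concrete, here is a small C sketch that prints
where each logical sector ends up on the track for a given interleave
factor.  The 17-sector track and the placement rule used here are only
illustrative; real controllers number and place sectors their own way.

/* Print the physical order of logical sectors on one track for a given
 * interleave factor.  Assumes a 17-sector MFM track, purely for
 * illustration. */
#include <stdio.h>

#define SECTORS_PER_TRACK 17

void show_layout(int interleave)
{
    int track[SECTORS_PER_TRACK] = {0};   /* 0 = slot still empty */
    int phys = 0, logical;

    for (logical = 1; logical <= SECTORS_PER_TRACK; logical++) {
        while (track[phys] != 0)           /* slot taken: skip ahead */
            phys = (phys + 1) % SECTORS_PER_TRACK;
        track[phys] = logical;             /* place logical sector here */
        phys = (phys + interleave) % SECTORS_PER_TRACK;
    }

    printf("interleave %d:", interleave);
    for (phys = 0; phys < SECTORS_PER_TRACK; phys++)
        printf(" %2d", track[phys]);
    printf("\n");
}

int main(void)
{
    show_layout(1);
    show_layout(3);
    show_layout(6);
    return 0;
}

With an interleave of 3 the logical sectors come out 1, 7, 13, 2, 8, 14, ...
around the track, so the controller gets roughly two sector-times to digest
each read before the next logical sector arrives under the head.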

At the software level, the applications program makes a request to the 
Operating System (O/S) for a particular chunk of data to be read from or 
written to the disk.  Without getting into a discussion of how an O/S does or 
should handle disk I/O, for reads let's say that the O/S will check its
internal buffers to see if the requested block of data resides in memory.  If
the requested block is in the O/S's buffers, it will pass the block to the
applications program.  If the block is not in memory, the O/S will issue a
command to the disk controller to read the selected block from the disk drive.  
When that block has been read from the disk, it will be given to the 
applications program.  Each time the applications program requests a block of 
data from the O/S, a certain amount of overhead is incurred.  Generally 
speaking, the larger the block of data requested by the applications program, 
the less overhead will be incurred.  That is to say, the less the O/S is
involved, the higher the data transfer rate.  Not all applications programs
read/write the same size blocks of data.  Applications programs that use
large blocks of data (buffers) are better able to take advantage of the
higher data transfer rates that are attainable with lower interleaves.  Of
course, up to now we have 
assumed that the applications program is not attempting to process any of the 
data in between reads and writes.  Overhead caused by the applications program 
processing the data as it is being read may cause the disk controller to wait 
for the next revolution of the disk before it may perform the requested read.  
For example, a word processor that reads the entire document into memory 
before it attempts to figure out how to format the document can achieve a 
higher data transfer rate than a word processor that formats each block as it 
reads it.
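
As a rough illustration of the per-call overhead, the following C sketch
times reading the same file with a small buffer and with a large one.  The
file name and buffer sizes are only illustrative assumptions; substitute
any large file on your own disk.

/* Time reading one file with small vs. large buffers to show how much
 * the per-request overhead matters. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double time_read(const char *name, size_t bufsize)
{
    FILE *fp = fopen(name, "rb");
    char *buf = malloc(bufsize);
    clock_t start, end;

    if (fp == NULL || buf == NULL) {
        perror(name);
        exit(1);
    }

    start = clock();
    while (fread(buf, 1, bufsize, fp) > 0)
        ;                                  /* just read until end of file */
    end = clock();

    free(buf);
    fclose(fp);
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    const char *file = "TESTFILE.DAT";     /* hypothetical test file */

    printf("512-byte reads: %.2f seconds\n", time_read(file, 512));
    printf("32K reads:      %.2f seconds\n", time_read(file, 32768U));
    return 0;
}

On a system with little buffering, the difference between the two runs is
mostly the extra O/S overhead plus the revolutions lost waiting for sectors
to come back around under the head.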

Ok, what does all of this mean?

Well, quite simply, an applications program may perform better with a higher
interleave factor than with a lower interleave factor that was selected by using
a program such as "CORETEST".  CORETEST does nothing but read data; by default
it uses a 64K buffer.  A typical applications program may only read 512 bytes
or even less with each read.  By the time you add in the overhead caused by
the O/S, chances are pretty good that the next record will have already passed
the disk's read/write head.

Programs should load faster with a lower interleave factor.  This is because
DOS allocates a chunk of memory and loads the program into that memory
using large buffers.  By the same token, the DOS COPY command should also
achieve better performance with a lower interleave, as it again only has to
handle large chunks of data.

Database applications may run faster with a higher interleave factor depending
on how poorly the disk I/O code was written and what data is being processed
between disk reads.

What can be done to improve performance?

Use a better disk controller.  Some controllers now have built-in cache 
memory.  If the requested disk record is already in cache memory, it may be 
transferred without waiting for it to be read from the disk.  

Improve DOS: if DOS read an entire track into its buffers, it could,
in many cases, supply the next record requested by the applications program
without needing to go to the disk controller.
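
Here is a rough C sketch of that track-buffering idea.  The sector counts
and the stub read_track_from_disk() routine are only stand-ins; real DOS
and BIOS internals look nothing like this, and this is just the caching
concept.

/* Keep the last track read in memory and satisfy further sector
 * requests from it without going back to the controller. */
#include <stdio.h>
#include <string.h>

#define SECTORS_PER_TRACK 17
#define SECTOR_SIZE       512

static unsigned char track_buf[SECTORS_PER_TRACK * SECTOR_SIZE];
static long cached_track = -1;             /* -1 means nothing cached yet */

/* stand-in for the real controller call; just fills a pattern */
static void read_track_from_disk(long track, unsigned char *dest)
{
    memset(dest, (int)(track & 0xff), SECTORS_PER_TRACK * SECTOR_SIZE);
    printf("controller read of track %ld\n", track);
}

static void read_sector(long track, int sector, unsigned char *dest)
{
    if (track != cached_track) {           /* miss: fetch the whole track */
        read_track_from_disk(track, track_buf);
        cached_track = track;
    }
    memcpy(dest, &track_buf[sector * SECTOR_SIZE], SECTOR_SIZE);
}

int main(void)
{
    unsigned char sector[SECTOR_SIZE];

    read_sector(5, 0, sector);             /* goes to the controller once */
    read_sector(5, 1, sector);             /* these two come straight     */
    read_sector(5, 2, sector);             /* from the track buffer       */
    return 0;
}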

Improve applications programs: use large buffers for disk I/O.

Optimize your disk drive often.  Use a program like "SD" which is included
with Peter Norton's Advanced Utilities.

Summary:

The best way to tell which interleave factor you should be using is to
try each one with the applications program that you use the most.  With
the WD-1002 you are using, an interleave of 5 is most likely the best you
can do.  With an Adaptec ACB-2010, try 2 through 5.  With an OMTI 5520, try
1 through 5.

The following formula may be used to calculate the maximum data transfer rate
of a disk drive.  The actual transfer rate will be lower due to system speed,
disk controller, bus width, Operating System overhead, the applications program,
etc.

		 Sectors-Per-Track * Sector-Size * RPM
	KB/S =  ---------------------------------------
		          Interleave * 61440

KB/S = Kilo Bytes per Second

Sectors-Per-Track = 17 for MFM drives
		    25 or 26 (controller dependent) for RLL
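
To put some numbers to the formula, here is a short C program that plugs in
sample values.  The figures used (17 sectors per track, 512-byte sectors,
3600 RPM) are the usual MFM numbers; substitute your own drive's parameters.

/* Maximum data transfer rate from the formula above, for several
 * interleave factors. */
#include <stdio.h>

static double max_kb_per_sec(int spt, int sector_size, int rpm, int interleave)
{
    return (double)spt * sector_size * rpm / (interleave * 61440.0);
}

int main(void)
{
    int il;

    for (il = 1; il <= 6; il++)
        printf("interleave %d: %6.1f KB/S\n",
               il, max_kb_per_sec(17, 512, 3600, il));
    return 0;
}

For a 17-sector MFM drive at 3600 RPM this works out to about 510 KB/S at
1:1, 170 KB/S at 3:1, and 85 KB/S at 6:1, which is why a needlessly high
interleave hurts so much on raw sequential transfers.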

I hope this helps take a little bit of the mystery out of this Bag of Worms.

ray@micomvax.UUCP (Ray Dunn) (04/27/88)

In article <188@vader.UUCP> bcs212@vader.UUCP (Vince Skahan) writes:
>I have a Tandy 20MB hard card that came with a recommended
>interleave factor of 3.  The guy at the Radio Shack Computer
>Center said they'd done extensive local testing and recomended
>using 2.
>
>I tested it myself and found "4" was the answer (giving a 450%
>increase in speed over both of the Tandy-recommended settings).....

4 may have been the answer, but what was the question??

Unfortunately, the "best" interleave factor is dependent on the application,
and if 4 happens to be the best for, say, a copy, which reads a track or
cylinder of info in on one DOS call, this does not mean it will be optimum
when running a disk-intensive application program.

Not that it has anything to do with PCs, but in an ideal system, the
interleave factor should be a specifiable attribute at file-write time, in
a hardware self-adjusting form, say milliseconds per some CPU performance
unit, and thus be application-tailored.