[comp.sys.att] Step rate change

lenny@icus.islp.ny.us (Lenny Tropiano) (08/23/89)

In article <1182@mitisft.Convergent.COM> dold@mitisft.Convergent.COM 
(Clarence Dold) writes:
...
|>Try setting the Step Rate in an iv.desc file to 14 instead of 0, 
|>then iv -u the disk.  No loss of data, just a 20% increase in seek performance
|>on 28mSec disks.
|>
|>Ignore any significance in my signature.
|>I don know nuddin about the Unix PC.
|>The iv part, in particular, is untried, untested, and subject to failure.
|>(but somebody that is well backed up should tell us if it works.)
|>

Ok, call me brave, but I had to try it...  It was also a good test for my
WD2010 anyhow.   I have another UNIX pc (icusdvlp) that I use for such
occasions; I never subject my main machine (icus) to anything this terrible
(well, almost never) ...

I proceeded tonight to try what Clarence Dold from Convergent suggested
above: change the step rate, rewrite the VHB, and see what difference,
if any, it makes.

This is how I went about it.

First I had to see if the WD2010 that I got a while back (not from Thad,
but he's the one who finally pushed me to install it and test it ...) was
going to work.  I installed it in my UNIX pc: 40MB, 2M RAM, pretty much
vanilla, except that Gil did the motherboard wiring work last weekend to
make the machine two (2) hard drive *ready*. (Thanks) When I find a need for
two, he'll make the upgrade board and we'll be in business.  

I powered up the machine in diagnostics, and ran the usual disk tests,
sequential seek, random seek, etc...  All tested just fine.  Step one was
verified, the WD2010 worked!

Now I booted the machine with the floppy boot disk, and then the floppy
filesystem so I could have a single user (very stripped down) floppy UNIX.
(Version 3.51).

Here's a very close representation of what I did ...  (excuse me if there
are any typos).

# /etc/umount /dev/fp002
# /etc/fsck -s /dev/rfp002		| I have /etc/fsck on my Floppy UNIX
					| and I figured I would salvage the
					| freelist while I was at it.
...
# /etc/mount /dev/fp002 /mnt
# cp /mnt/bin/dd /bin			| copy some utilities we'll need for
# cp /mnt/bin/time /bin			| benchmarking ...
# /etc/umount /dev/fp002

# /bin/time /bin/dd if=/dev/rfp002 of=/dev/null bs=100k

	| I have standard partitioning for multi-user (in other words,
	| swap is 5MB and the rest of the 40MB drive is /dev/fp002 (slice 2))

359+1 records in
359+1 records out

real	1:39.7
sys	   0.0
user	   2.9

	| I did this for several iterations; the 100k block size makes the
	| run rather fast... without a big buffer it would have taken a
	| VERY long time

	| Iteration #2

real	1:39.4
sys	   0.0
user	   3.1

	| Iteration #3

real	1:39.6
sys	   0.0
user	   3.1

	| Iteration #4

real	1:39.7
sys	   0.0
user	   3.0

...

	| Ok, now the fun and *dangerous* stuff ...   Jumping into
	| uncharted territory.  

# mount /dev/fp002 /mnt			| get the hard disk ready ...
# PS1="## "				| just signify the last shell with
					| a different prompt
## /mnt/etc/chroot /mnt /bin/sh		| chroot(2) to the Hard disk
# iv -dv /dev/rfp000 > HD.desc

	| Here I wrote out the description table (current one) of the VHB
	| and BBT (bad block table).   This is important, so when you write
	| the new VHB it matches exactly, except for the item we're about
	| to change...

# /bin/ed HD.desc
/step
steprate	0
s/0/14/p
steprate	14
w
131

	| Ok, we changed the steprate to 14 like Clarence suggested ...
	| Time to hold my breath..

# iv -uv /dev/rfp000 HD.desc
Device type: HD	  Name: WINCHE
Cylinders: 1024   Heads: 5	Sectors: 17

* Phase 1 - Initializing Internal VHB
* Phase 2 - Writing out new VHB
* Phase 3 - Writing out new BBT
* Phase 4 - Allocating download areas.
	    Writing out new loader.
	    3 partitions, 23 blocks for loader, Bad Block Table Allocated

	| Ok, everything seems normal yet.  Ahhhhhh....

# exit
## /etc/umount /dev/fp002
## /etc/fsck /dev/rfp002
...
## PS1="# "

	| Time to repeat above tests ...

# /bin/time /bin/dd if=/dev/rfp002 of=/dev/null bs=100k
359+1 records in
359+1 records out

real	1:40.1
sys	   0.0
user	   3.1

	| Hmmm, nothing ... no improvement; if anything, slightly slower ...

	| Iteration #2

real	1:40.2
sys	   0.0
user	   3.0

	| Iteration #3

real	1:41.0
sys	   0.0
user	   3.2

	| Iteration #4

real	1:40.1
sys	   0.0
user	   3.0


Nada .. Oh well, the bottom line of this long winded explanation is that
you can't improve the seek performance by 20% ... In fact it might just
be hindering it, but a 1 second difference could mean just about
anything.   If someone has a better test for seeking, let me know; I'm
willing to try this again under different stress test values ... 

						Take care ...
						Lenny
-- 
Lenny Tropiano             ICUS Software Systems         [w] +1 (516) 589-7930
lenny@icus.islp.ny.us      Telex; 154232428 ICUS         [h] +1 (516) 968-8576
{ames,pacbell,decuac,hombre,talcott,sbcs}!icus!lenny     attmail!icus!lenny
        ICUS Software Systems -- PO Box 1; Islip Terrace, NY  11752

dave@galaxia.Newport.RI.US (David H. Brierley) (08/24/89)

In article <947@icus.islp.ny.us> lenny@icus.islp.ny.us (Lenny Tropiano) writes:
>In article <1182@mitisft.Convergent.COM> dold@mitisft.Convergent.COM 
>(Clarence Dold) writes:
>...
>|>Try setting the Step Rate in an iv.desc file to 14 instead of 0, 
>|>then iv -u the disk.  No loss of data, just a 20% increase in seek 

>Ok, call me brave, but I had to try it...  It sorta was a test for my WD2010

># /bin/time /bin/dd if=/dev/rfp002 of=/dev/null bs=100k

>Nada .. Oh well, for this long winded explanation, essentially you
>can't improve the seek performance by 20% ...

But you didn't test seek performance, you tested straight sequential access
to the raw disk.  If you want to test seek performance you have to get the
heads to go back and forth and all around.  To really test seek performance
you would need a special program that did random seeks on the disk, but a
quick approximation might be achieved with:
	"find / -print | cpio -oB >/dev/null"
This will at least cause the heads to jump around a little more than the "dd"
would.

If this trick really can improve seek performance it would be a great boon
to people with limited amounts of memory (i.e. <= 1 meg) because most of the
performance degradation on the machine is caused by frequent accesses to the
swap area.  An access to the swap area requires a seek back to the beginning
of the disk, an i/o, and then another seek back to wherever you had the heads
positioned previously.
-- 
David H. Brierley
Home: dave@galaxia.Newport.RI.US   {rayssd,xanth,lazlo,mirror,att}!galaxia!dave
Work: dhb@rayssd.ray.com           {sun,uunet,gatech,necntc,ukma}!rayssd!dhb

psfales@cbnewsc.ATT.COM (Peter Fales) (08/24/89)

In article <947@icus.islp.ny.us>, lenny@icus.islp.ny.us (Lenny Tropiano) writes:
> 
> Nada .. Oh well, for this long winded explanation, essentially you
> can't improve the seek performance by 20% ... In fact it might just
> be hindering it, but for 1 second differnce that could mean just about
> anything.   If someone has a better test for seeking, let me know, I'm
> willing to try this again under different stress test values ... 

Well, I am not quite as brave as Lenny, but I wanted to try this out too.
Since I didn't feel comfortable writing a new VHB on my hard disk, I needed
some way to patch the copy of the VHB read in by the kernel.  It probably
could be done with the kernel debugger, but I found that an easy way was
to pick a random installable device driver and add the line

	gdsw[0].dsk.step = 14;
	
to the installation routine.   This seemed to work because when I do
a GDGETA on the disk, the value returned is 0 before installing this 
driver and 14 afterwards.

However, my results were much the same as Lenny's.  I tried a couple
of different benchmarks:  1) running compress on an 800K file keeps
the disk busy for about 1 minute.  Changing the step rate made no
measurable difference.  2) Trying to come up with a test that would require
many long seeks, I tried doing a dd from /dev/rfp001 to /tmp/jnk.
Surprisingly, this did not result in a particularly noisy disk, but it
did take about 7 minutes to run.  And again the difference between the two
times with different step rates was within the noise.

Finally, I tried the first test again with a step rate of 13.  According
to the manual, this is a 6.5 millisecond step rate as opposed to the
35 MICROsecond step rate used when the step parameter is zero.  Again
no difference.

So far, the bottom line seems to be:  If you don't need the larger disk,
the WD2010 is not going to help performance.  If any one else gets 
different results, please let us know.

-- 
Peter Fales			AT&T, Room 5B-420
				2000 N. Naperville Rd.
UUCP:	...att!peter.fales	Naperville, IL 60566
Domain: peter.fales@att.com	work:	(312) 979-8031

lenny@icus.islp.ny.us (Lenny Tropiano) (08/25/89)

In article <947@icus.islp.ny.us> lenny@icus.islp.ny.us (Lenny Tropiano) writes:
|>In article <1182@mitisft.Convergent.COM> dold@mitisft.Convergent.COM 
|>(Clarence Dold) writes:
|>...
|>|>Try setting the Step Rate in an iv.desc file to 14 instead of 0, 
|>|>then iv -u the disk.  No loss of data, just a 20% increase in seek 
|>|>performance on 28mSec disks.
|>|>
|>
... [ a bunch of stuff on how-to do it all left out, plus some erroneous
      hypotheses ] ...

|>Nada .. Oh well, for this long winded explanation, essentially you
|>can't improve the seek performance by 20% ... In fact it might just
|>be hindering it, but for 1 second difference that could mean just about
|>anything.   If someone has a better test for seeking, let me know, I'm
|>willing to try this again under different stress test values ... 
|>

As people pointed out, there were two things inherently *WRONG* with my
original hypothesis about the seek rates ...  Firstly, I was testing
I/O throughput, not seeking ... at least not major seeks; it was
track-to-track seeking...  Secondly, something I thought about after
Peter Fales mentioned changing the gdsw tables in memory to 14:
what I neglected to do was REBOOT!   That means when I changed the VHB,
the step-rate information would have remained the same in core memory...
At least this is what I assume ...  I doubt the kernel rereads the
VHB and rewrites the gdsw tables.

Well after thinking that through, I wrote a simple hack (for those
who want the disk exerciser, let me know) that just lseek()'d to the
first position of the hard disk (/dev/rfp000) [ie. 0] and the last
position of the hard disk ...  [ie. (maxcyls*maxheads+maxheads) *
sectorsize * sectors-per-track, where sectorsize = 512 and
sectors-per-track = 17]

I didn't really seek to the total end, I was about one sector off (512
bytes) and then I read 512 bytes in ...   I did this in a loop for
100 iterations, averaged all the iterations using the times(2) system
call.

Basically I called times(2) before the lseek() and read at the 
beginning, and after the lseek() and read at the end ...  I subtracted
time_after and time_before and averaged them over the 100 iterations.
I ran this program 5 times (100 iterations each) in step-rate 0, and
step-rate 14.  Wow! There was a difference.

Here are the average results ...
WD2010 installed
Step-rate = 0

Iteration #1 avg time = 51
Iteration #2 avg time = 51
Iteration #3 avg time = 51
Iteration #4 avg time = 51
Iteration #5 avg time = 51

Step-rate = 14
Iteration #1 avg time = 37
Iteration #2 avg time = 36
Iteration #3 avg time = 37
Iteration #4 avg time = 37
Iteration #5 avg time = 37

So on the average it was 14 time units (60ths of a second) faster ...
That's a 27.4% increase in seek performance.  Thanks Clarence!

It seems faster overall anyhow ...  I wonder how it will work on
those 20-meggers out there with 67ms seek time ...

I'd like to hear those success stories too!

-Lenny
-- 
Lenny Tropiano             ICUS Software Systems         [w] +1 (516) 589-7930
lenny@icus.islp.ny.us      Telex; 154232428 ICUS         [h] +1 (516) 968-8576
{ames,pacbell,decuac,hombre,talcott,sbcs}!icus!lenny     attmail!icus!lenny
        ICUS Software Systems -- PO Box 1; Islip Terrace, NY  11752

jbm@uncle.UUCP (John B. Milton) (08/27/89)

In article <947@icus.islp.ny.us> lenny@icus.islp.ny.us (Lenny Tropiano) writes:
>In article <1182@mitisft.Convergent.COM> dold@mitisft.Convergent.COM 
>(Clarence Dold) writes:
>...
>|>Try setting the Step Rate in an iv.desc file to 14 instead of 0, 
>|>then iv -u the disk.  No loss of data, just a 20% increase in seek performance
>|>on 28mSec disks.
>|>
>|>Ignore any significance in my signature.
>|>I don know nuddin about the Unix PC.
>|>The iv part, in particular, is untried, untested, and subject to failure.
>|>(but somebody that is well backed up should tell us if it works.)
>|>
>
>Ok, call me brave, but I had to try it...  It sorta was a test for my WD2010

I had my doubts, now I know. Changing the step rate WILL NOT increase
seek performance, only decrease it. This is a table of step rate values
and time delays:

 0	35us  (microseconds)
 1	0.5ms (milliseconds)
 2	1.0ms
 3	1.5ms
...
12	6.0ms
13	6.5ms
14	7.0ms
15	7.5ms

As a point of reference, the disk rotates at 3600 rpm, or 60 rps, 60 Hz.
1/60th of a second = 0.01666667 or about 17ms.

With the type of drives we use on the UNIXpc (ST 506 interface), the drive is
not told which cylinder to put the head on, but rather when to make ONE step,
and in which direction to make it. The same as for floppy drives. The HDC
(Hard Disk Controller, WD1010-05 or WD2010-05) can do seeks (step to the right
cylinder) in two ways. One is to set the cylinder you want, then issue a SEEK
command. This will send all the step pulses to the drive with the requested
delay between each pulse, then show complete. The drive may or may not ACTUALLY
be done seeking at this point, but the HDC shows complete and ready for the
next command. The second way to seek is to set the cylinder and the sector you
want to read or write, then issue the READ or WRITE command. The HDC will
compare the cylinder you want with the one the head is currently on, then issue
the appropriate number of steps in the right direction. The HDC will then WAIT
for the drive to show complete on the SC (Seek Complete) pin of the HDC. Some
drives will take a long time to show complete, others will show complete more
quickly, it all depends on how fast your drive can actually move the head. This
second kind of seeking is called an implied seek.

So, the value given to the HDC does not control how fast the drive seeks, only
how fast the step pulses are sent to the drive. The main reason the parameter
is available at all, is to deal with very old drives that can't take fast step
pulses, or can't tell the outside world when they're actually done with a
given seek. I have played around with several older hard drives, and I have
never found one that needed a slower step rate. Drives that DO need slow step
rates generally WON'T WORK AT ALL with fast step rates. As far as newer, more
intelligent drives go, the quicker you can send all the steps to the drive,
the quicker the drive can figure out where you really want the head. Once it
knows where you really want the head, it can seek most efficiently directly
to that position.

There is one other way to seek the drive. This method is called recalibrating.
The command in the HDC is RESTORE. With this seek method, the direction is
set towards lower numbered cylinders, and step pulses are sent to the drive
until the head gets to track 0. There is a special signal from the drive,
usually corresponding to an actual sensor, which is used to indicate when the
head is on track 0. The RESTORE command is used to get the head to a known
position, when the position of the head is not known or is suspect. After each
step pulse is sent to the drive, the HDC waits for the drive to show complete,
then tests the track 0 line to see if the head is all the way back. On most
drives this makes a very loud noise, because the head moves a short distance,
stops, and repeats. The RESTORE command is used when the machine is reset, or
on the UNIXpc, when there is a read error. If a hard drive shows not ready or
write failure during any command (including RESTORE), then the command is
aborted.

Now, for you intrepid folks who are out there writing seek testing programs
using the ioctl(f,GDSETA,gdctl) technique, WATCH OUT! The GDSETA interface
was provided ONLY FOR FLOPPY DRIVES!!! It was intended for programmers
who want to access non-UNIXpc (VHB) floppies (like Mtools). When a GDSETA
ioctl() is issued, the drive sizes in the "params" are used to reconfigure
the in-memory slot 0 partition table entry. All other partition table entries
for the specified drive from 1 to MAXSLICE are zeroed. This is not good. The
swap and all file system partitions simply disappear. The next time there
is a page fault, the offending process is killed. When init page faults, it
is killed, UNIX pitches a bitch and panics.

So, if you want to diddle the step rate, use:
1. iv -u (with a reboot)
2. a loadable driver
3. nlist()/lseek()/read()/write() on /dev/kmem

but do not use GDSETA.

I tried #3, and didn't get any change in seek time. It may be that the step
rate is only set once, when the VHB is read during the boot process. The step
rate can only be set with a RESTORE or SEEK command, the other commands do
not have step rate delay bit fields in them. Because of this, changes in the
step rate field in the gdsw table may not take effect until the next SEEK or
RESTORE command. As far as I can tell, there are no SEEK commands done, except
when formatting a disk!, and RESTOREs only happen when the disk gets up to 10
retries on a bad spot, or during formatting. These are my method #3 results:

Max cylinder: 67608575
Old step rate was 0
Seeks per step rate: 1000
Step rate 0 (0):148
Step rate 1 (1):198
Step rate 2 (2):185
Step rate 3 (3):178
Step rate 4 (4):134
Step rate 5 (5):140
Step rate 6 (6):157
Step rate 7 (7):140
Step rate 8 (8):144
Step rate 9 (9):135
Step rate 10 (10):134
Step rate 11 (11):134
Step rate 12 (12):134
Step rate 13 (13):136
Step rate 14 (14):134
Step rate 15 (15):133
step reset to 0 (0)

Quite frankly, I think this dead horse has been beat.
I know I am
'night folks.

Anybody want to stay up all night and re-boot their machine 16 times?

John
-- 
John Bly Milton IV, jbm@uncle.UUCP, n8emr!uncle!jbm@osu-cis.cis.ohio-state.edu
(614) h:294-4823, w:785-1110; N8KSN, AMPR: 44.70.0.52; Don't FLAME, inform!

psfales@cbnewsc.ATT.COM (Peter Fales) (08/28/89)

> I had my doubts, now I know. Changing the step rate WILL NOT increase the
> step rate performance, only decrease it. This is a table of step rate values
> and time delays:
> 
>  0	35us  (microseconds)
>  1	0.5ms (milliseconds)
>  2	1.0ms
>  3	1.5ms
> ...
> 12	6.0ms
> 13	6.5ms
> 14	7.0ms
> 15	7.5ms

This is the table for the WD1010; however, this is one case where the WD2010
differs from the 1010.  For the 2010, the last two table entries should be:

14	3.2 microseconds (I am sure of this)
15 	6.4 microseconds (I think, this is from memory)

> one that needed a slower step rate. Drives that DO need slow step
> rates generally WON'T WORK AT ALL with fast step rates. As far as newer, more
> intelligent drives go, the quicker you can send all the steps to the drive,
> the quicker the drive can figure out where you really want the head. Once it
> knows where you really want the head, it can seek most efficiently directly
> to that position.

Right, this is why Lenny et al. think it may be possible to improve seek
performance.  I have a drive in my DOS-PC that does not begin moving the
head until ALL the step pulses have been received.  In this case, it
takes about 35 milliseconds to get 1000 step pulses out with a 35 usec
step rate.  If you could cut this down to 3.2 milliseconds by using step
rate 14, it would cut about 30 milliseconds off your maximum seek.  In 
the case of the DOS machine, I was able to get my average access time
down from about 85 to 65 milliseconds by reburning my controller ROM
to use the fastest step rate available on the controller.

On the other hand, I suspect that my Miniscribe 6085 begins moving the
head as soon as the first step pulse is received.  Since the step pulses
are always keeping ahead of the head movement, it doesn't make much difference
whether they come in at 35 or 3.2 microseconds each.  This MAY explain
why I get no difference between step rate 0 and step rate 14.  What it
DOESN'T explain is why I get no reduction in performance by using one 
of the very slow rates like 7 or 13.

> Now, for you intrepid folks who are out there writing seek testing programs
> using the ioctl(f,GDSETA,gdctl) technique, WATCH OUT! The GDSETA interface
> was provided ONLY FOR FLOPPY DRIVES!!! It was intended for programmers
> who want to access non-UNIXpc (VHB) floppies (like Mtools). When a GDSETA
> ioctl() is issued, the drive sizes in the "params" are used to reconfigure
> the in-memory slot 0 partition table entry. All other partition table entries
> for the specified drive from 1 to MAXSLICE are zeroed. This is not good. The
> swap and all file system partitions simply disappear. The next time there
> is a page fault, the offending process is killed. When init page faults, it
> is killed, UNIX pitches a bitch and panics.

You are right that you can't use GDSETA, but my results were not so dramatic.
As soon as I ran the program, my prompt came back, but any attempt to
execute a command resulted in no output and another prompt.  The only way
I could recover was to reboot.

> So, if you want to diddle the step rate, use:
> 1. iv -u (with a reboot)

This is the method I have been using, complete with the reboot.

> I tried #3, and didn't get any change in seek time. It may be that the step
> rate is only set once, when the VHB is read during the boot process. The step
> rate can only be set with a RESTORE or SEEK command, the other commands do
> not have step rate delay bit fields in them. Because of this, changes in the
> step rate field in the gdsw table may not take effect until the next SEEK or
> RESTORE command. As far as I can tell, there are no SEEK commands done, except
> when formatting a disk!

Wow, does this explain why changing the value even in the VHB does not 
affect anything?  How about it JCM?

-- 
Peter Fales			AT&T, Room 5B-420
				2000 N. Naperville Rd.
UUCP:	...att!peter.fales	Naperville, IL 60566
Domain: peter.fales@att.com	work:	(312) 979-8031

jcm@mtunb.ATT.COM (was-John McMillan) (09/01/89)

In article <2729@cbnewsc.ATT.COM> psfales@cbnewsc.ATT.COM (Peter Fales) writes:
:
>
>> I tried #3, and didn't get any change in seek time. It may be that the step
>> rate is only set once, when the VHB is read during the boot process. The step
>> rate can only be set with a RESTORE or SEEK command, the other commands do
>> not have step rate delay bit fields in them. Because of this, changes in the
>> step rate field in the gdsw table may not take effect until the next SEEK or
>> RESTORE command. As far as I can tell, there are no SEEK commands done, except
>> when formatting a disk!
>
>Wow, does this explain why changing the value even in the VHB does not 
>affect anything?  How about it JCM?

Thanks, Pete...

The SEEK command is only used during formatting.  The RESTORE command
is used during formatting and upon the 10th (GDRETRIES-5) retry.

Unfortunately, you cannot easily force the former SEEK as, I believe,
the kernel and the leather restraints prevent re-formatting your
root drive.

Jumping up and down on a bad sector is questionable, and -- despite
our preferences -- some of us lack any bad disk sectors.  (Dysfunctional
brain sectors are discussed elsewhere.)

Soooo, a quick perusal of the LOADER indicates the only RESTORE it
performs is during an error retry -- and IT doesn't insert a rate,
presumably setting it to rate[0] == 35 us.

This leads to an awkward conclusion that (1) the ROM code is setting
the rate [?] and/or (2) it is only being set in systems lucky enuf
to have bad sectors read at least 10 times... OR *MOST LIKELY* one
of those bad brain sectors just kicked in and I've mist the target.

Thanks again, Pete....  more to figure out... Be back later......

john mcmillan	-- att!mtunb!jcm

PS: WD2010-05 StepRate[15] == 16us, not 6.4us as suggested elsewhere.

psfales@cbnewsc.ATT.COM (Peter Fales) (09/03/89)

Thanks to the combined efforts of a number of net folks, I
finally succeeded in getting my WD2010 to accept the faster step rate.

The hypothesis that step rate is only set in the ROM code and then
never touched unless there is a disk error seems to be correct.  I
was able to get my disk to use a step rate of 14 (3.2 microseconds)
which showed a significant improvement over the default rate of 0 (35 
microseconds) according to Lenny's exerciser.  I was also able to get
it to use a step rate of 13 (6.5 MILLIseconds) which, as expected,
produced a radical decrease in disk performance.  6.5 milliseconds
per step is a frequency of about 150 Hz, and I could hear the disk humming
as it slooooowwwwwlllllyyyy  moved the heads around the disk.

To get it to work, I changed the step rate parameter in the VHB
(using iv -u), then I had to figure out how to get the system to
execute a RESTORE.  I added the following lines to the "devrom" 
driver posted a while back (though any installable driver should
work, even one custom written for this purpose).

#include <sys/gdisk.h>

rominit()
{
	HD_BASE[H_COMMANDREG] = W_RESTORE+gdsw[0].dsk.step;
}

All this is doing is blasting out a RESTORE when the driver 
is installed as part of system initialization. 
I suspect that I may be on somewhat shaky ground here, because I
do no special checks to ensure that the WD2010 or anything else
in the system is not already busy.  But it is quick, easy, and seems
to work.  Maybe our kernel guru (hi JCM!) can provide more information.

I also had to make some minor changes to the INSTALL file to
tell masterupd that an init routine is now being provided, and to
ensure that masterupd is always run, even if /dev/rom already 
exists.  I can hear the RESTORE happen just after the "lddrv -a"
is executed, and there seem to be no unwanted side effects.

Once I made all the above changes, my benchmarks all of a sudden
started working faster.  My results from "testit" are summarized
here:

			Step Rate 0	Step Rate 14
		
long			9		7
random			5		4
converging		1232		1071
sequential		486		485

I haven't noticed much difference in the way the system behaves, but
I may try to do some comparisons on tasks like compiling large files
to see if it has any effect.

Thanks everyone!

-- 
Peter Fales			AT&T, Room 5B-420
				2000 N. Naperville Rd.
UUCP:	...att!peter.fales	Naperville, IL 60566
Domain: peter.fales@att.com	work:	(312) 979-8031

jdc@naucse.UUCP (John Campbell) (09/08/89)

From article <2903@cbnewsc.ATT.COM>, by psfales@cbnewsc.ATT.COM (Peter Fales):
: Thanks to the combined efforts of a number of net folks, I
: finally succeeded in getting my WD2010 to accept the faster step rate.
: 
: The hypothesis that step rate is only set in the ROM code and then
: never touched unless there is a disk error seems to be correct.  I
: was able to get my disk to use a step rate of 14 (3.2 microseconds)
: which showed a significant improvement over the default rate of 0 (35 
: microseconds) according to Lenny's exerciser.  
...
: -- 
: Peter Fales			AT&T, Room 5B-420
: 				2000 N. Naperville Rd.
: UUCP:	...att!peter.fales	Naperville, IL 60566
: Domain: peter.fales@att.com	work:	(312) 979-8031

Uh, I'm coming in a little late and a lot stupid, but didn't earlier
articles say all that was necessary was a reboot?  I would love to
know more since I put in my WD2010, changed (with iv) the step rate
to 14, rebooted and nada--no significant disk speed change.

What is a "RESTORE"?  How does the controller work?  Peter's article
was a tad thin for me to trust hacking a driver in order to force a
condition I don't understand yet.  (By the way, I only have 5 bad
blocks on my 67 Mb disk.  Am I a candidate for few retries and hardly
ever a restore?)

Any help appreciated, right now I'm leaving my step rate at 14 and just
hoping/waiting for a magical RESTORE to occur on its own...

-- 
	John Campbell               ...!arizona!naucse!jdc
                                    CAMPBELL@NAUVAX.bitnet
	unix?  Sure send me a dozen, all different colors.