[comp.sys.amiga.tech] MaxTransfer for 1.3 FFS

jps@ulysses.homer.nj.att.com (John P. Snively) (11/07/88)

[I'm posting this for Don DeCourcelle (ihnp4!ulysses!mtsbb!dad). Please
send replies directly to him. -- John Snively, AT&T Bell Labs ]

Someone on the net complained that their 1.3 FFS could copy files OK but
would crash when they tried to execute something.  They mentioned a GVP
disk controller I think.  Anyway, I also experienced this problem with
my Phoenix 20MEG system on an A1000.  More specifically I noticed that the
machine hung when I tried running large executables.  Small 20-30K
programs worked fine, but 80K programs hung consistently (DPaint II was the
test case).  In addition, the programs that hung could be copied to
RAM: and run from there OK!

I thought the way my system hung was suspicious too... the system froze with
the hard drive read-light stuck ON.  Rebooting the system did not clear this.
You had to cycle the drive power to reboot, which indicates that the SCSI
controller itself hung (it's mounted in the Phoenix drive's box).

Another point: the 1.3 manual explained that one of the reasons FFS was faster
was that it stored the disk data in a form that could be directly executed, and
that FFS was capable of large contiguous reads.   HMMmmm!!..... large
contiguous reads huh... and large programs hang... seems to me that the drive
controller isn't clocking large I/Os... better call the factory!  Sounds like
I'm hitting some kind of transfer threshold.

I called Phoenix and explained my problem.  They said that their
mountlist was different from mine.  Specifically, they added the lines:

Priority = 10
MaxTransfer = 50000

A HA! There WAS a transfer limitation for my controller.  I have no
idea why.  Anyway I tried this fix, and the problem improved.  I eventually
set MaxTransfer down to 30000 before all problems disappeared (or at least I
haven't noticed any more problems).  I hope this helps.

	-Don deCourcelle

PS.
Anybody know why "MaxTransfer" is required, i.e., the technical reason?

lphillips@lpami.van-bc.UUCP (Larry Phillips) (11/08/88)

 > Anybody know why "MaxTransfer" is required, i.e., the technical reason?

MaxTransfer is required for the FFS.  Since FFS will request the largest
possible number of blocks it can (contiguous sectors in a data file to be
read or contiguous free sectors for a write), the driver/controller needs
to be able to handle that size of request.  Most controllers are limited in
the number of sectors that can be transferred at once, and if the driver
does not allow for multiple operations to cover the requested data size,
you will end up with errors.  MaxTransfer simply limits the amount the FFS
will request in one chunk.

-larry

--
"Intelligent CPU?  I thought you said Intel CPU!" 
        -Anonymous IBM designer-
+----------------------------------------------------------------+ 
|   //   Larry Phillips                                          |
| \X/    lpami.wimsey.bc.ca!lphillips or van-bc!lpami!lphillips  |
|        COMPUSERVE: 76703,4322                                  |
+----------------------------------------------------------------+