[comp.sys.atari.st] Disk R/W times for large files

exodus@uop.UUCP (Greg Onufer) (07/05/87)

Anybody care to explain what's wrong here?

I used GULAM's timing function to time misc. file copies onto several
formats of disk.  The tables should explain:

***** 505338 Byte file from near-empty HD partition to Floppy *****

Formatter   Fast/Slow   Tracks  Sec/Trk   # of 5-ms 'ticks'
===============================================================
DCFORMAT     FAST         82      9            7407
DCFORMAT     SLOW         82      9            7408
DCFORMAT     SLOW         80      9            7429
DCFORMAT     FAST         80      10           7640
DCFORMAT     FAST         82      10           7726
TWISTER      FAST         82      9?           8284

***** Same file from each disk to the same near-empty partition *****
	(File on HD deleted each time, and saved and deleted prior
	 to any timing)

Formatter   Fast/Slow   Tracks  Sec/Trk   # of 5-ms 'ticks'
===============================================================
DCFORMAT     FAST         82      9            7117
DCFORMAT     SLOW         82      9            7117
DCFORMAT     SLOW         80      9            7159
DCFORMAT     FAST         80      10           7355
DCFORMAT     FAST         82      10           7392
TWISTER      FAST         82      9?           7392

***** Same file from floppies to 580K RamDisk *****
	(Same conditions as above, file removed after
	 each copy, obviously)

Formatter   Fast/Slow   Tracks  Sec/Trk   # of 5-ms 'ticks'
===============================================================
DCFORMAT     SLOW         80      9            6784
DCFORMAT     FAST         82      9            7082
DCFORMAT     SLOW         82      9            7083
DCFORMAT     FAST         82      10           7399
TWISTER      FAST         82      9?           7478
DCFORMAT     FAST         80      10           7640


I wasn't overly scientific about this, and a file this large probably
brings into play some of GEMDOS's faults, but it's slightly more
realistic this way.  If these results are typical, some of these
sector-skewing formats have serious flaws.  I may try using DCFORMAT
to 'copy' these disks and record the read times for an entire
diskette.  I assume DCFORMAT's Slow/80/9 is the same as what the
desktop formatter produces?


Greg Onufer (exodus)    1040ST        | Mail: University of the Pacific
GEnie: G.ONUFER         No less!      | UTH c21, Stockton, CA 95211
UUCP: ...!{lll-crg,ucbvax}!ucdavis!uop!exodus 49-6221-76.18.42 (Home-Germany) 
      ...!{ptsfa!cogent,cepu!retix}!uop!exodus  (209) 474-1795 (College-Un th

apratt@atari.UUCP (07/07/87)

in article <383@uop.UUCP>, exodus@uop.UUCP (Greg Onufer) says:
> 
> Anybody care to explain what's wrong here?
> 
> I used GULAM's timing function to time misc. file copies onto several
> formats of disk.  The tables should explain:
> 

Big files will copy fast only if big reads and writes are used. If
your copy program reads one sector, writes it to the floppy, then
reads another sector, you'll lose.  The shell we shipped with the
developer's kit a while ago, called COMMAND.PRG, used a 1000 (not 1K)
byte buffer.  This is a real problem.  Big reads are optimized
to read a whole track at a time (for instance).  When that is
the case, sector skewing will LOSE, because a skewed track takes
multiple revolutions to read.

For operations like file copy, the lesson is to use as big a buffer
as you can.  Don't create a static 8192-byte array: instead,
determine how much memory you have available and use all of it.

Here is a little code in Alcyon C (this depends on the variable
_break, set up by gemstart and changed when you use gemlib's malloc).
It returns the number of bytes available starting at _break, and that
stays valid as long as you do no function calls (especially not
to gemlib's malloc()).

long freemem()
{
    extern long _break;
    long dummy;		/* &dummy is something near the current sp */

    return ((long) &dummy - _break - 512);	/* 512 is a chicken factor */
}

If you have used Mshrink to return memory to the operating system
(which is the case if you set the STACK variable in gemstart.s
to 0, 1, 2, or 3), you may have more memory than this available
using Malloc (the OS call).  Malloc(-1L) returns the largest Malloc
request which can be satisfied.  If you Malloc this, use it as a
disk buffer, then Mfree it, you will not run into trouble.
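
For example, a copy loop along these lines (an untested sketch; it
assumes osbind.h supplies the usual Malloc, Mfree, Fread and Fwrite
bindings; 'bigcopy' is just an illustrative name, and 'in'/'out' are
GEMDOS handles the caller has already opened):

#include <osbind.h>

/* Copy from handle 'in' to handle 'out' using the largest buffer
   GEMDOS will give us, then return the memory to the OS. */
long bigcopy(in, out)
int in, out;
{
	long size, n;
	char *buf;

	size = Malloc(-1L);		/* largest satisfiable request */
	if (size <= 0L)
		return -1L;
	buf = (char *) Malloc(size);	/* grab all of it */
	if (buf == 0)
		return -1L;

	while ((n = Fread(in, size, buf)) > 0L)
		Fwrite(out, n, buf);

	Mfree(buf);			/* give it back when done */
	return n;			/* 0 at EOF, negative on error */
}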

/----------------------------------------------\
| Opinions expressed above do not necessarily  |  -- Allan Pratt, Atari Corp.
| reflect those of Atari Corp. or anyone else. |     ...lll-lcc!atari!apratt
\----------------------------------------------/	(APRATT on GEnie)

braner@batcomputer.UUCP (07/08/87)

[]

Greg didn't say what program copied the files.  I _guess_ it's the Gulam
'cp' command.  I don't know how it does it, but apparently not well.
500K in 36 seconds is about 14 Kbytes/sec.  The theoretical max is 22.5
Kbytes/sec: one track per rev, i.e. 9 sectors x 512 bytes x 5 revs/sec
= 23040 bytes/sec.  My boot disks actually achieve that
when copying into the RAMdisk using 'Autodisk' (and the floppy is 'fast'
formatted).  Autodisk copies with a _huge_ buffer (the whole RAMdisk).
"Twister" formatted disks read about 80% as fast, and standard disks
about half as fast (i.e., twice as slow).

But my experiments (while modifying microEmacs, etc.) show that, with
typical text files (<50K), a buffer of 9K (one DS track) yields
performance very close to that of larger buffers.  That is with standard
("slow") formatted disks.  (The performance gradually levels off as you
increase the buffer size through 4.5, 9 and 18K.)

- Moshe Braner

egisin@orchid.UUCP (07/10/87)

In article <1643@?>, braner@batcomputer.tn.cornell.edu (braner) writes:
> But my experiments (when modifying microEmacs, etc) show that, with typical
> text files (<50K), a buffer of 9K (one DS track) yields a performance that
> is very close to that of larger buffers.  That is with standard
> ("slow") formatted disks.  (The performance gradually levels off as you
> increase the buffer size through 4.5, 9 and 18K.)

There isn't any point in making I/O buffers a multiple of the track
size when using GEMDOS I/O; it is unlikely the file begins at side 0,
sector 1.  Making the buffer large is what matters.


I've been using disks formatted with 10 sectors/track, and was
wondering if this is within the specs for the floppy controller, or
outside IBM specs, or what.  Has anyone had problems with this format?

Does anyone have some C code that does floppy I/O at the controller
level?  I want to do a "track read".  (I don't want assembler; I can
get that from the BIOS listing.)
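
For reference, the XBIOS Floprd() call will read a run of consecutive
sectors from one track in a single request, though it still goes
through the ROM rather than driving the FDC directly.  A rough,
untested sketch (read_track is just an illustrative name):

#include <osbind.h>

/* Read 'spt' consecutive sectors (a whole track's worth) from one
   side of drive 0 (A:) into buf, which must hold spt * 512 bytes.
   Returns 0, or a negative error code from the BIOS. */
long read_track(buf, track, side, spt)
char *buf;
int track, side, spt;
{
	return Floprd(buf, 0L, 0, 1, track, side, spt);
}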

braner@batcomputer.tn.cornell.edu (braner) (07/15/87)

[]

The BIOS, as far as I know, does _not_ use the FDC in track-read mode.
Anybody know how to do that?  Code (in AL OK)?

- Moshe Braner