braner@batcomputer.UUCP (01/31/87)
[] In BYTE benchmarks the Mac+ is FASTER in floppy speed than the ST. (Note that '+'.) The speed of reading long files on the ST by the system, in the case where it simply dumps the data into RAM (e.g. loading a long program), is about one-half of the theoretical speed of 5 tracks per second (due to 300 rpm), which is 22.5 Kbytes/sec with 4.5K per track. Reading text files with programs that call the system for each character is a LOT slower, and writing is also slower due to the read-after-write verification. Some sophisticated programs (e.g. STCOPY 2.0) DO read the disk right at the theoretical limit.

As for hard disks, in practice the speed is limited by track-to-track head movement speed and by the system's approach to reading system info (e.g. directory and FAT sectors). TOS does not cache those, and ends up moving the head a LOT. I'm not sure what the Mac does, but I have read a review of several hard disks for the Mac in a Mac mag, and the conclusion was that in normal use all the hard disks had about the same speed (hinting at a software (OS) limit on speed), and that that speed was only about twice the speed of floppies (I can't remember whether it was the old (toy) Mac or the Mac+ (a real computer)). The mention of "HD reading at the theoretical maximum speed" must refer to some specialized software.

(I am NOT trying to rekindle the "my computer is better" war. Leave the religious wars to the Ayatollahs, please!)

The speed of the DMA port on the ST will prove useful in the future (i.e. in a year or two :-) when there are faster hard disks around, TOS has (?) been cleaned up, and/or the HD is used with sophisticated disk-caching software. (Of course Apple will improve the Mac meanwhile...)

- Moshe Braner
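The floppy numbers Moshe quotes are simple arithmetic, and easy to check. A quick sanity pass (a modern Python sketch, nothing ST-specific):

```python
# Check of the figures above: 300 rpm with 9 sectors of 512 bytes per
# track (4.5K, the standard single-sided ST format) gives the
# theoretical sequential rate quoted in the post.

RPM = 300
BYTES_PER_TRACK = 9 * 512          # 4.5K per track

tracks_per_second = RPM / 60       # one track per revolution
bytes_per_second = tracks_per_second * BYTES_PER_TRACK

print(tracks_per_second)           # 5.0 tracks/sec
print(bytes_per_second / 1024)     # 22.5 Kbytes/sec
```

So "half the theoretical speed" for plain program loading means roughly 11 Kbytes/sec in practice.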
wheels@mks.UUCP (02/02/87)
During this discussion about disk transfer rates, I have not seen anyone mention the effects of the interleave factor. By the way, Atari, what is the interleave factor for the standard formatter?

If, for example, the factor were two, then logical sector #2 would be two sectors beyond logical sector #1. While the intervening sector is passing under the heads, the OS or application can process the data from sector #1 and get ready for the next read. Some programs take more time than others to process what they have just read. If they take too long, they will miss the next logical sector and will have to wait one whole revolution to get it.

If one knows in advance how a particular disk will be used, one can tweak the interleave to suit. For example, my original copy of ST Raider loads more quickly than my backup copy. You can actually hear the difference in the head stepping rate. I think the original must have different interleaving, tuned to the loading of programs into memory (little processing needed).

The standard formatter must strike a compromise. If sequential logical sectors are too close, some programs will read (or write) more slowly than necessary. If the sectors are spread too far apart, fast programs are penalised and have to wait for the sector to come around. In my experience on various microcomputers, this one factor has had more effect on disk transfer rates than any other. Some formatters give a choice of interleave factor for hand-tweaking.

By the way, if you're about to format a hard disk, and if you have the option, try different interleave factors before putting too much data on the disk.

Gerry
mhorne@tekfdi.UUCP (02/04/87)
In regards to disk speeds on the ST... >During this discussion about disk transfer rates, I have not seen >anyone mention the effects of the interleaf (interleave?) factor. >By the way, Atari, what is the interleaving factor for the standard >formatter? Good question! The interleave factor AND the formatting (interleave) algorithm seems to have changed from the disk-based TOS to the ROM based TOS. I picked at the original disk-based TOS (from ST Internals) and it appears to work correctly (true sector interleave), but the ROM base version seems a bit wierd. Try setting the interleave factor to a fairly large number ( < 32 ) with the flpfmt() call and you will find your disk reads to approach the 'theoretical' limit as far as transfer rates are concerned, but you will have a fairly volatile disk! At least all of the disks I formatted this way were quirky. I noticed that as your interleave factor increases (0 < interleave < 32), the disk read times decrease. On the original interleave subroutine in TOS (disk), this shouldn't happen. If I remember correctly, you will get sectors numbered 1 2 3 4 5 6 7 8 9 with any interleave factor greater than 9, meaning any disk formatted with 9 < interleave < 32 will provide the same disk transfer rate. So, what's the poop: did Atari change their flpfmt() routine, and if so, why? >Some programs take more time than others to process what they have >just read. If they take too long, they will miss the next logical >sector and will have to wait one whole revolution to get it. Bravo! Someone figured it out! I have been sitting by quietly while people have been stating that the ST is SLOW on disk transfer rates, even though the actual data rate is high. This is entirely possible, but it isn't the hardware! Let's face it, most software for most computers rarely take advantage of the hardware's full capability. That would require more research by software engineers (extremely time consuming)! 
Which just happens to bring up another thing: those of you who read this month's Byte, with the 1040 ST review, will have noticed the traditional ritual of roasting Atari (could the ST possibly be a better machine than the Mac or IBM? Naaaaaa...) by Byte. Clearly the 'review' (har har!) was done by an idiot, or else he would have noticed that Atari Basic is the pits! In fact, he kinda liked it! But look at the disk reads and writes for the ST that were quoted in the article. Are you kidding? Does this tell me anything about the disk transfer capability of the ST? HELL NO! It tells me that Atari Basic is the pits, and that is all. Anybody interested in an ST who reads the review will take one look at those figures and laugh all the way to the nearest Mac or IBM dealer. But hey, Byte says that it "is a good machine with only a few problems". Byte wouldn't be slightly biased, would they?

>If one knows in advance how a particular disk will be used, one can
>tweak the interleave to suit. For example, my original copy of
>ST Raider loads more quickly than my backup copy. You can actually
>hear the difference in the head stepping rate. I think the original
>must have different interleaving, tuned to the loading of programs
>into memory (little processing needed).

Yes, they do have different interleaves. Not all formatting programs use the same interleave factor.

>The standard formatter must strike a compromise. If sequential logical
>sectors are too close, some programs will read (or write) more slowly
>than necessary. If the sectors are spread too far apart, fast programs
>are penalised and have to wait for the sector to come around.
> Gerry

True, there is some optimum spacing between sectors, though I haven't dinked around enough to figure it out. It can be done with just paper and pencil. I (unfortunately) haven't had the opportunity to finish writing a program to do my own formatting, but the concept is simple.
The floppy controller can do a 'track write', which is used to format a complete track with a sector layout. One would only need to lay out a typical track in memory (with all of the necessary inter-record gaps, etc., in place), tell the DMA where the buffer is, then give a 'track write' command to the floppy controller. That is all the TOS routine is doing, but you could play around with the inter-record spacing, interleave, etc. You could easily get 10 sectors on a track, and possibly 11 if you get rid of a large chunk of the end-of-track gap, though you get closer and closer to losing your data in the bit-pit if there isn't enough gap (e.g. when writing to the last sector on the track, you could wipe out the first sector, the start-of-track mark, etc.).

By the way, all of this stuff (including the necessary track layout) can be found in ST Internals, most of which you will have to get from the BIOS listing (flpfmt()). It isn't difficult to understand or decode.

Mike

-----------------------------------------------------------------------
Michael Horne - KA7AXD                FDI group, Tektronix, Incorporated
Packet: KA7AXD@KA7AXD                 Phone: 503-626-2647 h, 627-6796 w
Domain: mhorne@tekfdi.fdi.tek.com
CSNET: mhorne@tekfdi.fdi.tek.csnet@csnet-relay.csnet
UUCP: {decvax,hplabs,hp-cd,reed,uw-beaver}!tektronix!tekfdi!mhorne
-----------------------------------------------------------------------
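The track image Mike describes can be sketched roughly as follows. The byte values and gap sizes follow the standard layout given in ST Internals (4E = gap filler, 00 = sync, F5 = write an A1 mark, F7 = controller emits a two-byte CRC), but this is a paper model in Python, not a working formatter; treat the exact gap numbers as illustrative:

```python
# Build the buffer a WD177x-style 'write track' command would consume.
# Note: each F5/F7 occupies ONE byte in the buffer but the F7 expands
# to TWO CRC bytes on the disk surface, so the on-disk budget is a
# little tighter than the buffer length.

def sector_slot(track, side, sector, gap3=40):
    slot = bytearray()
    slot += b'\x00' * 12 + b'\xf5' * 3            # sync + ID address marks
    slot += bytes([0xfe, track, side, sector, 2]) # ID: cyl, head, sec, size (2 -> 512)
    slot += b'\xf7'                               # ID field CRC
    slot += b'\x4e' * 22 + b'\x00' * 12           # gap 2 + sync
    slot += b'\xf5' * 3 + b'\xfb'                 # data address mark
    slot += b'\xe5' * 512                         # data field fill bytes
    slot += b'\xf7'                               # data field CRC
    slot += b'\x4e' * gap3                        # gap 3 (inter-record gap)
    return slot

def track_image(track, side, order, gap3=40, track_len=6250):
    buf = bytearray(b'\x4e' * 60)                 # gap 1, after the index pulse
    for sec in order:
        buf += sector_slot(track, side, sec, gap3)
    on_disk = len(buf) + 2 * len(order)           # F7s expand to 2 bytes each
    buf += b'\x4e' * max(0, track_len - on_disk)  # gap 4 fills out the track
    return buf

nine = track_image(0, 0, [1, 6, 2, 7, 3, 8, 4, 9, 5])   # standard 9 sectors
ten  = track_image(0, 0, list(range(1, 11)), gap3=24)   # squeeze in a 10th
```

At roughly 6250 bytes per track, nine standard sectors leave a fat end-of-track gap; trimming gap 3 makes room for a tenth sector, which is exactly the game Mike describes (and exactly where the safety margin starts to vanish).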
cmcmanis@sun.UUCP (02/05/87)
In article <756@tekfdi.TEK.COM>, mhorne@tekfdi.TEK.COM (Mike Horne) writes:
< I (unfortunately) haven't had the opportunity to finish writing a
< program to do my own formatting, but the concept is simple. The floppy
< controller can do a 'track write' which is used to format a complete
< track with a sector layout. One would only need layout a typical track
< in memory (with all of the necessary inter-rec gaps, etc., in place),
< tell DMA where the buffer is, then give a 'track write' command to the
< floppy controller. That is all the TOS routine is doing, but you could
< play around with the inter-record spacing, interleave, etc. You could
< easily get 10 sectors on a track, and possible 11 if you get rid of
< a large chunk of the end-of-track gap, though you get closer and closer
< to losing your data in the bit-pit if there isn't enough gap (e.g. when
< writing to the last sector on the track, you could wipe out the first
< sector, the start of track mark, etc.).
<
< By the way, all of this stuff (including the necessary track layout)
< can be found in ST Internals, most of which you will have to get from
< the BIOS listing (flpfmt()). It isn't difficult to understand or decode.
<
< Michael Horne - KA7AXD FDI group, Tektronix, Incorporated
[I have heard the ST uses the WD controller for floppies if this is not
true ignore this]
As Mike indicates, doing a track write is the standard way to format
a floppy controlled by a Western Digital controller. There are some
"magic cookie" bytes in the track data that get translated into
header bytes, sector CRCs, and the like. And you can play around
with the gaps; however, you will not be able to get reliable operation
with 10 or 11 sectors. The gaps protect the data in subsequent sectors.
When a sector is written, the last thing the disk does is write the
sector CRC; the write heads are then switched "off". If the gap is
too short, they aren't off before they pass over the address mark of
the next sector, and they change it. Poof! Unreadable sector. It is
fun to play with; I wrote a program to do so when I wrote a CP/M
BIOS for a WD controller. You end up learning a whole lot about the
mechanics of a disk drive and the timing constraints.
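Chuck's point about the write gate can be put in numbers. Assuming the standard 250 kbit/s double-density MFM rate, the margin a gap buys is simple arithmetic:

```python
# At 250 kbit/s, one byte takes 8/250000 s = 32 microseconds to pass
# under the head.  A gap of G bytes therefore gives the write gate
# G * 32 us to switch off before the next sector's address mark
# arrives.  The 8-byte case is a made-up "squeezed" gap for contrast.

BIT_RATE = 250_000                  # bits/second, double-density MFM
us_per_byte = 8 / BIT_RATE * 1e6    # 32.0 us per byte

def gap_margin_us(gap_bytes):
    return gap_bytes * us_per_byte

print(gap_margin_us(40))   # standard inter-record gap: 1280.0 us
print(gap_margin_us(8))    # squeezed gap: only 256.0 us of margin
```

Shrink the gaps to cram in extra sectors and you are betting that every drive's write gate shuts off inside that ever-smaller window.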
--
--Chuck McManis
uucp: {anywhere}!sun!cmcmanis BIX: cmcmanis ARPAnet: cmcmanis@sun.com
These opinions are my own and no one elses, but you knew that didn't you.
lsr@apple.UUCP (02/06/87)
In article <756@tekfdi.TEK.COM> mhorne@tekfdi.UUCP (Mike Horne) writes:

>Basic is the pits! In fact, he kinda liked it! But look at the disk
>reads and writes for the ST that were quoted in the article. Are you kidding?

The May 1986 BYTE has benchmarks of the Mac, ST, and Amiga done by Bruce Webster. These were done in a variety of languages (on the ST, he used Personal Pascal and Hippo C), picking the best times for each machine.

The disk tests were: write 64 512-byte blocks, read the blocks sequentially, and read the blocks randomly. The results were (in seconds):

              Mac Plus     ST    Amiga
write              2.3   30.3      7.3
seq read           1.1   15.9      5.1
random read        4.6   14.2     17.8

The ST came out best when run on performance benchmarks (sieve, quicksort), however.
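Converting that table into effective throughput makes the point about software overhead plainer (each test moves 64 x 512 bytes = 32K):

```python
# Effective sequential-read throughput implied by the BYTE table.

KB = 64 * 512 / 1024               # 32 KB moved per test

times = {                          # seconds, from the table above
    'Mac Plus': {'write': 2.3,  'seq read': 1.1,  'random read': 4.6},
    'ST':       {'write': 30.3, 'seq read': 15.9, 'random read': 14.2},
    'Amiga':    {'write': 7.3,  'seq read': 5.1,  'random read': 17.8},
}

for machine, t in times.items():
    print(f"{machine}: {KB / t['seq read']:.1f} KB/s sequential read")
```

The ST figure works out to about 2 KB/s, an order of magnitude below the ~22.5 KB/s the hardware can stream, which is the strongest evidence yet that the benchmark measured the language and the DOS, not the disk.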
jafischer@watrose.UUCP (02/09/87)
>The disk tests were: write 64 512-byte blocks, read the blocks
>sequentially, and read the blocks randomly.
>
>The results were (in seconds):
>              Mac Plus     ST    Amiga
>write              2.3   30.3      7.3
>seq read           1.1   15.9      5.1
>random read        4.6   14.2     17.8

Of course you realize that these speeds _can_ be greatly improved upon. Generic benchmarks are really annoying, you know? The best benchmark, in my opinion, would take the best possible speeds for each machine. And it's not like you have to roll up your sleeves and hijack the FDC either. Some guy I know wrote his own benchmark doing exactly the above, but with a 32K buffer, and the time went down to somewhere in between the Mac Plus and the Amiga.

--
- Jonathan Fischer (jafischer@watrose)
or: watmath!watrose!jafischer
or: jafischer%watrose@waterloo.csnet
or: jafischer%watrose@waterloo.csnet@csnet-relay.arpa
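The 32K-buffer result is roughly what a simple per-call-overhead model predicts. The 0.2-second overhead figure below is a made-up illustration chosen to be in the right ballpark, not a measured GEMDOS number:

```python
# Model: total time = (number of OS calls * per-call overhead)
#                   + (raw transfer time at the disk's data rate).

def read_time(total_bytes, buffer_size, per_call_overhead_s, rate_bps):
    calls = -(-total_bytes // buffer_size)        # ceiling division
    return calls * per_call_overhead_s + total_bytes / rate_bps

TOTAL = 64 * 512                                  # the BYTE test's 32K
RAW = 22_500                                      # ~raw ST floppy rate, bytes/s

print(read_time(TOTAL, 512, 0.2, RAW))     # 64 calls: overhead dominates
print(read_time(TOTAL, 32768, 0.2, RAW))   # 1 call: close to the raw rate
```

With these illustrative numbers, 64 separate 512-byte calls cost around 14 seconds, uncomfortably close to the 15.9 seconds BYTE measured, while a single 32K call lands under 2 seconds.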
kgschlueter@watrose.UUCP (02/09/87)
Yet another caveat to these disk benchmarks: assuming that these are the ones from BYTE, the Amiga times were measured under release 1.1 of the system software. Release 1.2 (which has been out for a couple of months) has faster disk I/O.
dillon@CORY.BERKELEY.EDU.UUCP (02/10/87)
>Of course you realize that these speeds _can_ be greatly improved
>upon. Generic benchmarks are really annoying, you know? The best bench-
>mark, in my opinion, would take the best possible speeds for each machine.
>And it's not like you have to roll up your sleeves and hijack the FDC either.
>Some guy I know wrote his own benchmark doing exactly the above, but
>with a 32K buffer, and the time went down to somewhere in between the Mac
>Plus and the Amiga.

The best possible speeds to read and write the floppy are, for all three machines, the theoretical maximum for the disk mechanism. Generic benchmarks are meant to show the overhead of the DOS in question. Still, the BYTE benchmark results are seriously lacking. Properly, the benchmarks should be run for various buffer sizes (512, 4K, 32K) and on both a blank disk and a well-used disk, with all the results published.

-Matt
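The benchmark matrix Matt proposes can be sketched in a few lines. This is a modern Python illustration of the idea (run every buffer size and publish all the numbers), with placeholder file names and sizes; on any machine with a disk cache the absolute times mean little, but the shape of the experiment is the point:

```python
# Write the BYTE test's 32K of data using several buffer sizes and
# report a time for each, instead of quoting a single number.

import os
import tempfile
import time

def bench(path, total=64 * 512, buffer_sizes=(512, 4096, 32768)):
    results = {}
    data = b'\x00' * total
    for bufsize in buffer_sizes:
        t0 = time.perf_counter()
        with open(path, 'wb') as f:
            for off in range(0, total, bufsize):   # one OS call per buffer
                f.write(data[off:off + bufsize])
        results[bufsize] = time.perf_counter() - t0
    return results

with tempfile.TemporaryDirectory() as d:
    for bufsize, secs in bench(os.path.join(d, 'bench.dat')).items():
        print(f"{bufsize:6d}-byte buffer: {secs:.4f} s")
```

Adding a second run on a fragmented, well-used disk (as Matt suggests) would complete the matrix.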