cwilson@NISC.SRI.COM (Chan Wilson) (03/10/90)
How do those fast disk copiers work?

I mean, I go to read $xx number of blocks from a floppy, and it takes its own sweet time about it.

What I'm really asking for here is a slice of code that will accept a starting block #, and an ending block #, and go fetch. Fast.

--Chan
................
Chan Wilson -- cwilson@nisc.sri.com <!> I don't speak for SRI.
Janitor/Architect of comp.binaries.apple2 archive on wuarchive.wustl.edu
"And now, the penguin on top of the television set will explode."
................
BRL102@psuvm.psu.edu (Ben Liblit) (03/11/90)
In article <14076@fs2.NISC.SRI.COM>, cwilson@NISC.SRI.COM (Chan Wilson) asks:
>
>How do those fast disk copiers work?

First of all, try to do all of your disk access at once. It takes time for a floppy disk drive's motor to get itself going, and all your program can do in the meantime is wait. Once you've got the motor cranked up, try to get everything you'll need in rapid sequence, so that the motor doesn't have time to shut off after one read before being turned on again for the next.

Secondly, write tight code. This is particularly applicable to assembly language routines, but everyone knows the kind of effect an efficient innermost loop can have. As a brief example, suppose one wishes to perform some process x times. Rather than start from zero and count up to x, it is more efficient to start from x and count down to zero. Decrementing the loop index to zero will automatically set the Z flag, allowing for BEQ/BNE branching. The count-up technique requires the use of a comparison (CMP, CPX, CPY) to set the processor flags before branching may be performed. The extra cycles this requires add up quickly.

Many other speed-gainers exist. Tight code means reduced use of subroutines. Modular design is nice, but when speed is of the essence, straightforward sequential execution is the only way to go. Efficient use of registers is also important, especially on a machine that has so few. Not only are register operands faster than those that interact with memory, they take up fewer bytes of memory as well. The same applies to zero page use. Zero page access is faster than access to the rest of memory, so use it to store quick-access data that won't fit into a register. Zero page is also more space-efficient, as only one byte is required to specify an address rather than two.

There is a last, rather creative tactic that I believe (I could be wrong) was used in the old Locksmith Fast Disk Backup.
Apparently, Locksmith is intelligent enough to realize that since it is copying every sector off of a given track, it doesn't matter which one it starts with. As soon as it reads a valid sector, *any* valid sector, it puts it into memory and marks it as read. It keeps this up until it has read the entire track (if it takes *too* long, it does know to flag an error).

The advantage of this is that, when a drive is turned on, the location of the read head with respect to the sectors on a disk is basically random. Suppose a copy program insists on reading sector 16 ($F) first. If the drive reaches proper reading speed just *after* sector 16 has rotated past the read head, it will take an entire revolution of the disk before data retrieval can begin. By simply starting to read wherever the read head happens to be, a copy program can speed up operations considerably. Unfortunately, though, this approach is difficult to apply to most other situations.

Hope this helps....

Ben Liblit
BRL102 @ psuvm.bitnet -- BRL102 @ psuvm.psu.edu
"Fais que tes reves soient plus longs que la nuit."
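Ben's start-anywhere tactic above is easy to quantify. The sketch below is a hypothetical simulation (16 sectors per track, and the simplifying assumption that a reader can grab every sector in one pass once it starts); it is not Locksmith's actual code:

```python
SECTORS = 16  # sectors per track on a standard 5.25" disk

def revs_fixed_start(start):
    # Insist on a particular sector first: wait for it to come around
    # from wherever the head happens to be, then read one full track.
    wait = (SECTORS - start) % SECTORS
    return (wait + SECTORS) / SECTORS

def revs_any_start(start):
    # Grab whichever sector is under the head first; one revolution total.
    return 1.0

# Average over every possible rotational position of the head at spin-up.
avg_fixed = sum(revs_fixed_start(s) for s in range(SECTORS)) / SECTORS
avg_any = sum(revs_any_start(s) for s in range(SECTORS)) / SECTORS
print(avg_fixed, avg_any)  # roughly 1.47 revolutions vs exactly 1.0
```

Under these assumptions the fixed-start reader averages almost half a revolution of pure waiting per track, which is the saving Ben describes.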
fadden@cory.Berkeley.EDU (Andy McFadden) (03/11/90)
In article <14076@fs2.NISC.SRI.COM> cwilson@NISC.SRI.COM (Chan Wilson) writes:
>How do those fast disk copiers work?
>
>I mean, I go to read $xx number of blocks from a floppy, and it takes
>its own sweet time about it.
>
>What I'm really asking for here, is a slice of code that will accept a
>starting block #, and an ending block #, and go fetch. Fast.

Suggestions for 5.25" disks:

1) Use something like the ProDOS routines, which decode the raw track data on the fly. That is, it undoes the 6&2 disk encoding while it's reading the data from the disk. You will not get the blocks in order, but reassembly is relatively trivial.

2) Read the entire track into memory as raw data, then decode that. There's no particular advantage to this method, except that it's easier to write and debug. Keeping the track in memory will improve things, if you have the memory to spare and aren't reading whole tracks every time.

3) Leave the drive running. This leaves you in slow mode, but you don't have to wait for it to come back on.

> Chan Wilson -- cwilson@nisc.sri.com <!> I don't speak for SRI.
--
fadden@cory.berkeley.edu (Andy McFadden)
...!ucbvax!cory!fadden
stephens@latcs1.oz.au (Philip J Stephens) (03/11/90)
In article <14076@fs2.NISC.SRI.COM>, cwilson@NISC.SRI.COM (Chan Wilson) writes:
> How do those fast disk copiers work?
>
> I mean, I go to read $xx number of blocks from a floppy, and it takes
> its own sweet time about it.
>
> What I'm really asking for here, is a slice of code that will accept a
> starting block #, and an ending block #, and go fetch. Fast.

If all you are looking for is a slice of code to _read_ a floppy very fast, then you are in luck. It is possible to create an optimized reading routine (using tables and the like) that can read an entire track in a single revolution. I can't give you the code for this in this article without wasting a lot of bandwidth though! E-mail me if you're interested.

Writing an entire track in a single revolution cannot be done unless you change the format of the disk, or (in the case of Locksmith) read the data from the disk in a scrambled format.

> --Chan

</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>
<  Philip J. Stephens              ><  "Many views yield the truth."          >
<  Hons. student, Computer Science ><  "Therefore, be not alone."             >
<  La Trobe University, Melbourne  ><   - Prime Song of the viggies           >
<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>
stephens@latcs1.oz.au (Philip J Stephens) (03/11/90)
In article <90069.141109BRL102@psuvm.psu.edu>, BRL102@psuvm.psu.edu (Ben Liblit) writes:
>
> There is a last, rather creative tactic that I believe (I could be wrong) was
> used in the old Locksmith Fast Disk Backup. [details deleted]

This is just the tip of the iceberg with regard to how Locksmith copies disks so fast. It actually pretends that the sector has been stored in a different (but similar) format to the 6&2 encoding actually used. I can't remember exactly how it interprets it, as it's been ages since I peeked at the raw code. However, it does compress it back into 256 bytes (which is why it can copy so many tracks at one time), but the data is scrambled.

It turns out that this scrambled data can be re-converted back into the 350-odd disk bytes _quicker_ than with the 6&2 encoding scheme; in fact, it can be done on the fly. Hence Locksmith can write at maximum speed.

Why, then, don't we use this alternative scheme rather than the 6&2 encoding scheme? Again, I'm not 100% certain due to time eroding my memory bank, but I think you'd have to scramble the sector in RAM before writing it, and hence you lose the speed advantage. That's probably crap, though :-)

</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>
<  Philip J. Stephens              ><  "Many views yield the truth."          >
<  Hons. student, Computer Science ><  "Therefore, be not alone."             >
<  La Trobe University, Melbourne  ><   - Prime Song of the viggies           >
<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>
huang@husc4.HARVARD.EDU (Howard Huang) (03/12/90)
>How do those fast disk copiers work?
If you are reading in a whole disk to copy it and care not what order
the blocks appear in (this worked in DOS 3.3 at least):
One thing that may help is to change the interleave pattern. If you
are reading in blocks/sectors with a fast, tight assembly loop, you may
be able to use a tighter (lower-numbered) interleave.
Another thing to be sure of is that you are reading blocks in order.
Making the drive head jump from block 20 to block 600 takes time.
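A rough way to see why a tighter loop permits a lower interleave number (an illustrative sketch only; the timing figures below are hypothetical, not measured from real hardware):

```python
import math

def interleave_for(decode_us, sector_us):
    # How many whole sectors fly past the head while we are still
    # decoding the last one?  The next logical sector must be laid
    # out just beyond them, hence the +1.
    missed = math.ceil(decode_us / sector_us)
    return missed + 1

# Illustrative figures: ~12500 us per sector at 300 rpm with 16
# sectors per track, and two hypothetical decode-loop speeds.
print(interleave_for(15000, 12500))  # slow loop  -> 3:1 interleave
print(interleave_for(5000, 12500))   # tight loop -> 2:1 interleave
```

The faster your loop finishes, the fewer sectors slip by, and the closer together consecutive logical sectors can be laid out.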
----------------------------------------------------------------------------
Howard C. Huang Internet: huang@husc4.harvard.edu
Sophomore Computer Science Major Bitnet: huang@husc4.BITNET
Mather House 426, Harvard College UUCP: huang@husc4.UUCP (I think)
Cambridge, MA 02138 Apple II: ftp husc6.harvard.edu
toddpw@tybalt.caltech.edu (Todd P. Whitesel) (03/12/90)
stephens@latcs1.oz.au (Philip J Stephens) writes:
> This is just the tip of the iceberg in regards to how Locksmith copies disks
>so fast. It actually pretends that the sector has been stored in a different
>(but similar) format to the 6&2 encoding actually used. I can't remember
>exactly how it interprets it as it's been ages since I peeked at the raw code.
>However, it does compress it back into 256 bytes (which is why it can copy so
>many tracks at one time), but the data is scrambled.
> It turns out that this scrambled data can be re-converted back into the
>350 odd disk-bytes _quicker_ than the 6&2 encoding scheme; in fact, it can be
>done on the fly. Hence Locksmith can write at maximum speed.
> Why then, don't we use this alternative scheme rather than the 6&2 encoding
>scheme?

Got me. I remember figuring out something similar to it while trying to write an ultra-fast disk reader, and wondering the same thing... it could be done if we reformatted all our disks with it, but that's a pain in the neck. (New FST!)

Actually, scrambling/unscrambling in memory doesn't hurt as much as waiting for sectors to come by, so if you read in the whole track fast and then unscrambled it, you'd still get a nice speed improvement. If you did the unscrambling while waiting for the head to step to the next track, you'd get even more speed. Whether you can hit the physical limit of the disk drive, I don't know. Would make an interesting demo...

Todd Whitesel
toddpw @ tybalt.caltech.edu
ART100@psuvm.psu.edu (Andy Tefft) (03/13/90)
All this is very interesting, but does it apply to 3.5" drives too? Took me forever to copy a 3.5" disk. Is there at least a program which will do single-drive 3.5" copies faster than Copy II+?
huang@husc4.HARVARD.EDU (Howard Huang) (03/13/90)
>All this is very interesting, but does it apply to 3.5" drives
>too? Took me forever to copy a 3.5" disk. Is there at least a
>program which will do single-drive 3.5" copies faster than Copy II+?

You can try Photonix if you have 1MB of memory and a IIgs. Another program is DigiCopy GS, which works with any amount of memory in a GS. You can get both of these by ftp to husc6.harvard.edu. They're probably also available on the list server at Brown. Otherwise, I can mail you a copy.

----------------------------------------------------------------------------
Howard C. Huang                     Internet: huang@husc4.harvard.edu
Sophomore Computer Science Major    Bitnet:   huang@husc4.BITNET
Mather House 426, Harvard College   UUCP:     huang@husc4.UUCP (I think)
Cambridge, MA 02138                 Apple II: ftp husc6.harvard.edu
cwilson@NISC.SRI.COM (Chan Wilson) (03/14/90)
In article <2195@husc6.harvard.edu> huang@husc4.UUCP (Howard Huang) writes:
>>All this is very interesting, but does it apply to 3.5" drives
>>too? Took me forever to copy a 3.5" disk. Is there at least a
>>program which will do single-drive 3.5" copies faster than Copy II+?
>
>You can try Photonix if you have 1MB of memory and a IIgs. Another program
>is DigiCopy GS, which works with any amount of memory in a gs. You can
>get both of these by ftp to husc6.harvard.edu. They're probably also
>available on the list server at Brown. Otherwise, I can mail you a copy.

Source, man, I want source!! >;->

Say, is the unidisk faster than the appledisk in reading things, or is it just my overworked imagination?

--Chan
................
Chan Wilson -- cwilson@nisc.sri.com <!> I don't speak for SRI.
Janitor/Architect of comp.binaries.apple2 archive on wuarchive.wustl.edu
"And now, the penguin on top of the television set will explode."
................
lesatz@alcor.usc.edu (Eric Michals) (03/15/90)
In article <2160@husc6.harvard.edu> huang@husc4.UUCP (Howard Huang) writes:
>Another thing to be sure of is that you are reading blocks in order.
>Making the drive head jump from block 20 to block 600 takes time.

Maybe it is just my drive, but I've also noticed that reading from sector $F to sector $0 is quicker than from $0 to $F.

Another thing that I found interesting is that Locksmith 5.0 fast disk backup doesn't read the sectors in order, but rather in a strange pattern. I've never taken the time to figure out why.

Eric
bmarlowe@ics.uci.edu (Brett Marlowe) (03/15/90)
That "feature" is not unique to ROM 03 machines. My ROM 01 machine does it too; I stumbled across that when I first got my GS and was tracing through the ROM, snooping around. Thanks for reminding me about that! However, there is one question I have: do the ROM 03 machines also have a list of firmware developers (or whatever it was), or something else?

-------------------------------------------------------------------------------
Brett Marlowe                 | A senior in mathematics and Information
bmarlowe@bonnie.ics.uci.edu   | & Computer Science at the University of
ma3022034@vmsc.oac.uci.edu    | California, Irvine.
eapu018@orion.oac.uci.edu     | I said it, so I'm responsible for it not
Compuserve: 73247,1640        | them!  So There!!
-------------------------------------------------------------------------------
                              Apple // Forever!!
-------------------------------------------------------------------------------
cs122aw@ux1.cso.uiuc.edu (Scott Alfter) (03/15/90)
In article <8613@chaph.usc.edu> lesatz@alcor.usc.edu (Eric Michals) writes:
>Maybe it is just my drive, but I've also noticed that reading from
>sector $F to sector $0 is quicker than from $0 to $F.
>
>Another thing that I found interesting is that Locksmith 5.0 fast disk
>backup doesn't read the sectors in order, but rather in a strange
>pattern. I've never taken the time to figure out why.

What you're talking about is interleave. Yes, that property of hard disks that GS owners are constantly playing with is a property of every random-access mass-storage medium ever developed.

The computer will read in a bunch of data, encoded 6-and-2 on the disk, and take some time to decode it into the familiar 256 bytes per sector. (342 disk bytes are required to store one sector; the reasons are beyond the scope of this post.) In that time, another sector is whizzing by the drive head, so if it was the next sector you needed, the disk will have to make a full spin before the computer can catch it again.

Interleaving lets you scatter sectors on a disk so that when the computer is done processing one sector, the sector coming up on the disk is the next one the computer will read; the one that went under the head while the computer was chewing the numbers from the last sector is not what DOS is looking for, so the disk doesn't have to make a full spin to get the next sector. Locksmith is set up to take advantage of interleave to read sectors in the most efficient manner possible.

Scott Alfter-------------------------------------------------------------------
Internet: cs122aw@ux1.cso.uiuc.edu       _/_    Apple II: the power to be
          alfter@mrcnext.cso.uiuc.edu   / v \             your best!
          saa33413@uxa.cso.uiuc.edu    (  (     A keyboard--how quaint!
Bitnet:   free0066@uiucvmd.bitnet       \_^_/     --M. Scott, STIV
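Scott's scattering idea can be written as a little generator: lay logical sectors around the track so that each one lands a fixed number of physical positions after the previous. This is a generic sketch rather than the actual DOS or ProDOS skew table, though with a skew of 2 it happens to produce the familiar 2:1 pattern. (On the byte count he mentions: 6&2 stores 6 data bits per disk byte, so a 256-byte sector needs ceil(256*8/6) = 342 disk bytes, before prologue/epilogue/checksum overhead.)

```python
def layout(skew, sectors=16):
    # slots[physical position] = logical sector number
    slots = [None] * sectors
    pos = 0
    for logical in range(sectors):
        while slots[pos] is not None:    # slot already taken: slide forward
            pos = (pos + 1) % sectors
        slots[pos] = logical
        pos = (pos + skew) % sectors     # skip ahead to allow decode time
    return slots

print(layout(2))
# [0, 8, 1, 9, 2, 10, 3, 11, 4, 12, 5, 13, 6, 14, 7, 15]
```

Reading that layout in logical order, the drive always finds the next wanted sector one position beyond the one it just decoded, instead of a full revolution away.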
gwyn@smoke.BRL.MIL (Doug Gwyn) (03/15/90)
In article <8613@chaph.usc.edu> lesatz@alcor.usc.edu (Eric Michals) writes:
-Maybe it is just my drive, but I've also noticed that reading from
-sector $F to sector $0 is quicker than from $0 to $F.
That's just the interleaving issue in a thin disguise.
-Another thing that I found interesting is that Locksmith 5.0 fast disk
-backup doesn't read the sectors in order, but rather in a strange
-pattern. I've never taken the time to figure out why.
Probably it interleaves its access to speed things up.
stephens@latcs1.oz.au (Philip J Stephens) (03/15/90)
In article <12350@smoke.BRL.MIL>, gwyn@smoke.BRL.MIL (Doug Gwyn) writes:
> In article <8613@chaph.usc.edu> lesatz@alcor.usc.edu (Eric Michals) writes:
>
> -Another thing that I found interesting is that Locksmith 5.0 fast disk
> -backup doesn't read the sectors in order, but rather in a strange
> -pattern. I've never taken the time to figure out why.
>
> Probably it interleaves its access to speed things up.

To be more precise, it reads the _physical_ sectors in the order that they appear on the disk, which translates as a different pattern of _logical_ sectors.

The interleaving of a standard DOS 3.3 disk runs "backwards", so to speak; that is, the distance between sector $F and sector $E (in that order) is greater than the distance between sector $E and $F. Thus if you read the logical sectors backward, you have more time to catch each one, resulting in a _faster_ response time! (Reading forwards causes DOS to miss the next sector, and so it ends up waiting a full revolution for each sector.)

P.S. A few people have requested the source code (assembly) for fast reading of sectors, using a 5 1/4 inch drive. I will have this source in my possession come Monday, so if there is enough interest I may post it on comp.apple2.binaries (or whatever it's called). I will try and document it enough so that the technically inclined can understand how it works.

</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>
<  Philip J. Stephens              ><  "Many views yield the truth."          >
<  Hons. student, Computer Science ><  "Therefore, be not alone."             >
<  La Trobe University, Melbourne  ><   - Prime Song of the viggies           >
<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>
toddpw@tybalt.caltech.edu (Todd P. Whitesel) (03/15/90)
cwilson@NISC.SRI.COM (Chan Wilson) writes:
>Say, is the unidisk faster than the appledisk in reading things, or is
>just my overworked imagination?

It's your imagination. The UniDisk has its own controller (a 2 MHz 65C02) and uses the SmartPort packet protocol to communicate with the host. The packet protocol is slow because (a) it works at Disk ][ speed, which is half as fast as the raw 3.5, and (b) it has to read the disk block and then transmit it through the port, which takes even more time. So you literally cannot get respectable speeds from a UniDisk. 4:1 interleaves are optimal, and that is pretty sad...

Why didn't Apple

 (a) make SmartPort as fast as the 3.5 instead?
 (b) give the UniDisk controller enough RAM to do 1:1 interleave
     track reads and caching? (Come on, 16K (two 6264s) is cheap!)

Because they haven't bothered to improve anything once it is out the door... EXCEPT the 1 meg GS. The UniDisk could have become the 3.5 of choice, but no... it is still an inefficient double-buffering drive controller. If I could figure out how to modify one and make it killer for cheap, I'd make gobs of bux selling upgrade kits. Except I shouldn't be doing that; Apple should...

Todd Whitesel
toddpw @ tybalt.caltech.edu
wack@udel.edu (Andrew Wack) (03/15/90)
In article <1990Mar15.142012.15985@spectre.ccsf.caltech.edu> toddpw@tybalt.caltech.edu (Todd P. Whitesel) writes:
>cwilson@NISC.SRI.COM (Chan Wilson) writes:
>
>>Say, is the unidisk faster than the appledisk in reading things, or is
>>just my overworked imagination?
>
>Its your imagination. The UniDisk has its own controller (a 2 mhz 65c02)
>and uses the SmartPort packet protocol to communicate with the host. The
[edit]
>Why didn't Apple
>
> (a) make smartport as fast as the 3.5 instead?
> (b) give the unidisk controller enough RAM to do 1:1 interleave
>     track reads and caching? (come on, 16K (2 6264's) is cheap!)
>
>Because they haven't bothered to improve anything once it is out the door...
>EXCEPT the 1 meg GS. The Unidisk could have become the 3.5 of choice but

The answer is they did upgrade, in the usual Apple fashion: through software! System Disk 5.0 reads 3.5" disks at a 1:1 interleave (even though the disk is still set up with 2:1 interleave) by buffering tracks. Thus they made the (cheaper) AppleDisk as fast as the UniDisk could ever be, for free! Granted, this doesn't help non-GS owners. And if you want slow floppy access, try a 3.5" disk on an IBM sometime... ugghh.
--
-------------------------------------------------------------------------------
Andrew Wack                        Gravitation cannot be held responsible
Internet : wack@udel.edu           for people falling in love -- Albert Einstein
toddpw@tybalt.caltech.edu (Todd P. Whitesel) (03/16/90)
wack@udel.edu (Andrew Wack) writes:

[ I asked why the UniDisk didn't get upgraded to be a really nice 3.5 drive ]

>The answer is they did upgrade, in the usual apple fashion, through software!
>System Disk 5.0 reads 3.5 disks at a 1:1 interleave (even though the disk
>is still set up with 2:1 interleave) by buffering tracks. Thus they made
>the (cheaper) appledisk as fast as the unidisk could ever be for free!
>Granted this doesn't help non GS owners.

... Or UniDisk owners. They still use the packet protocol, and there isn't a thing 5.0 can do about that. You can, however, lobotomize a UniDisk and make it into an Apple 3.5, but you can't chain anything after it.

The best solution would be to upgrade the UniDisk card to be a full Disk Port card (i.e. IIGS Disk Port equivalent, "the only floppy controller card you'll ever need") and phase out the UniDisk.

Todd Whitesel
toddpw @ tybalt.caltech.edu
jason@madnix.UUCP (Jason Blochowiak) (03/16/90)
In article <14076@fs2.NISC.SRI.COM> cwilson@NISC.SRI.COM (Chan Wilson) writes:
>How do those fast disk copiers work?
>I mean, I go to read $xx number of blocks from a floppy, and it takes
>its own sweet time about it.

Well, the blocks (on floppies) are composed of two sectors. Each 5.25" disk has 35 tracks (there are some with more or less, but that's how many standard Apple drives & OS'es use), and each track has 16 sectors. The sectors each consist of an address field and a data field. The address field indicates which track & sector this is, and the data field is the encoded data for that sector. Each field has a prologue, some data, a checksum, and an epilogue.

When a disk driver goes to read a specific sector, it seeks to the specified track and looks at what's passing under the head until it either times out or sees an address prologue. At this point, it reads the address field's contents, decodes and verifies it (using the checksum), and then makes sure that the proper epilogue is there. It then checks to see if the sector number in the address field matches what it's looking for. If it matches, the driver waits for the data prologue, reads, decodes, and verifies the data, and makes sure that the epilogue is in place. If the sector in the address field doesn't match, the disk read decrements a safety count and goes back to reading the address field (actually, waiting for one to come under the head).

So, when a ProDOS (or GS/OS) 5.25" driver goes to read a block, it has to 1) wait for the first sector to come under the head, 2) read/decode it, 3) wait for the second sector to come under the head, and 4) read/decode that one. Now, depending on how things are set up exactly, the "wait for sector x to come under the head" may take a really long time (relative to the speed at which these toys generally operate). The problem, then, is to avoid waiting for the sectors to come under the head.
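Jason's find-the-sector loop, reduced to a toy Python simulation (the spinning disk is modeled as a repeating list of address/data pairs; real code reads raw nibbles and verifies prologues, checksums, and epilogues, all omitted here):

```python
from itertools import cycle

def read_sector(track_fields, want, max_tries=32):
    # track_fields: what passes under the head, one (address, data)
    # pair per physical sector, repeating as the disk spins.
    head = cycle(track_fields)
    for _ in range(max_tries):       # the "safety count"
        address, data = next(head)   # read & verify an address field
        if address == want:          # the sector we're after?
            return data              # read & decode its data field
    raise IOError("sector %d never came around" % want)

# Eight sectors laid out in a 2:1-style physical order:
track = [(n, "data-%d" % n) for n in (0, 4, 1, 5, 2, 6, 3, 7)]
print(read_sector(track, 5))  # -> data-5
```

The simulation also shows why the safety count matters: a sector that never appears (bad address field, wrong track) would otherwise spin the loop forever.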
There are two ways of going about this: 1) constantly read and decode whatever comes under the head, or 2) read an entire track image into memory and then decode it. Both of these work, as the sectors are laid out (basically) one next to another. I can't remember if the first way is possible on a 1 MHz machine - you may end up reading every other sector due to timing constraints.

So, the really fast disk copiers generally read in a track, decode it, and then re-write it. Some of them don't even decode the track - they just read a bunch of data, figure out where the track starts and ends (using methods that I'm not going to describe here), and then dump the track to the target disk. Due to the way the data is encoded (which is a little bit weird, as you're not allowed to put more than two 0 bits next to each other on the Disk ][), it's more memory-efficient to read, decode, write (assuming you're going in chunks - read, decode, read, decode, ..., encode, write, encode, write, ...).

All of the stuff I've described here has to do with 5.25" disks. It's what I'm most familiar with on that level - by the time the 3.5" drives had come to the Apple // world, I had lost interest at this level. But Apple's 3.5" drives are pretty much like the 5.25" drives, with enough differences to make life interesting ;)

>What I'm really asking for here, is a slice of code that will accept a
>starting block #, and an ending block #, and go fetch. Fast.

Well, you'd probably have to do block->sector/track conversions, but that's easy enough (with 8 blocks/track it's pathetically easy). As for the rest of the code, most of what I know I got from fiddling around and from reading _Beneath Apple DOS_, from Quality Software. They also wrote _Beneath Apple ProDOS_, which might be a bit more of what you're looking for, but I'm almost certain that the low-level info contained in _BAP_ is basically the same as in _BAD_, perhaps with more of an eye towards the ProDOS way of doing things.
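The block->track/sector conversion called "pathetically easy" above looks roughly like this. (Simplified sketch: a real ProDOS 5.25" driver also maps each half of the block through a sector skew table rather than taking sectors in adjacent pairs.)

```python
BLOCKS_PER_TRACK = 8   # 512-byte blocks; 16 x 256-byte sectors per track

def block_to_track_sectors(block):
    # Which track, and which two 256-byte sectors, hold this block?
    track, index = divmod(block, BLOCKS_PER_TRACK)
    return track, (2 * index, 2 * index + 1)

print(block_to_track_sectors(0))    # -> (0, (0, 1))
print(block_to_track_sectors(13))   # -> (1, (10, 11))
```

With 35 tracks this covers blocks 0-279, the full 140K of a standard 5.25" ProDOS disk.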
I'm almost positive that they include a track-read routine, and if they don't, any programmer with more than 3 tufts of hair left to pull out while writing the routines should be able to manage without any great difficulties.

> Chan Wilson -- cwilson@nisc.sri.com <!> I don't speak for SRI.
--
Jason Blochowiak - jason@madnix.UUCP
or, try: astroatc!nicmad!madnix!jason@spool.cs.wisc.edu
"Education, like neurosis, begins at home." - Milton R. Saperstein
cyliao@eng.umd.edu (Chun-Yao Liao) (03/19/90)
In article <1990Mar15.173750.17835@spectre.ccsf.caltech.edu> toddpw@tybalt.caltech.edu (Todd P. Whitesel) writes:
>... Or UniDisk owners. They still use the Packet protocol, and there isn't a
>thing 5.0 can do about that. You can, however, lobotomize a Unidisk and make
>it into an Apple 3.5, but you can't chain anything after it.
>
>The best solution would be to upgrade the Unidisk card to be a full Disk Port
>card (i.e. IIGS Disk Port equivalent, "the only floppy controller card you'll
>ever need") and phase out the UniDisk.
>
>Todd Whitesel
>toddpw @ tybalt.caltech.edu

Good point. So, is there any GS + UniDisk owner willing to sell me a UniDisk 3.5 at a ridiculously low price?
--
|I want Rocket Chip 10 MHz, Z-Ram Ultra II, UniDisk 3.5 | cyliao@wam.umd.edu  |
|I want my own NeXT, 50MHz 68040, 64Mb RAM, 660Mb SCSI, | Chun Yao Liao       |
|     NeXT laser printer, net connection.               | Accepting Donations!|
/* If (my_.signature =~ yours) coincidence = true; else ignore_this = true; */
NU156266@NDSUVM1.BITNET (03/21/90)
There have been several previous posts on fast reading of floppies. Programs like Locksmith and some other fast copiers read a complete image of the disk tracks into memory without decoding them at all. However, to store the complete tracks into memory, they use the access time spent getting the disk drive up to speed, and the drive arm movement delays, to manage memory when storing the previously read track images.

I don't know of any such routines for the 3.5" disk drives or the hard disks. You'd have to go buy an Applied Engineering SCSI cache card for that. I don't think you'd be able to implement such code using the ProDOS MLI calls to fast-read blocks, but under DOS 3.3 you could do so by reading the sectors in backwards order.

Becky
nu156266 @ ndsuvm1