rowley@orville.nas.nasa.GOV (Karl Rowley) (04/20/88)
Does anyone know for sure that Turbodos delays writes to disk?  It seems
hard to believe that someone with enough knowledge to write Turbodos
would make such a mistake.

How much a delayed write cache would really buy on the ST is a good
question.  In a lot of cases it may not buy much over a write-through
cache.

The ideal disk caching program for the ST would:

	(1) provide a write-through cache,
	(2) be configurable to grab a user-specified amount of memory
	    at boot time,
	(3) use a FIFO algorithm for allocating cache space,
	(4) allow the user to turn caching for any device on or off
	    on the fly, and
	(5) interface to the rest of the system through the rwabs() vector.

Such a capability should be a standard part of the ST system software
(i.e. provided by Atari).

Karl Rowley
ames!orville.nas.nasa.gov!rowley
rowley@orville.nas.nasa.gov
"Any opinions expressed are my own."
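[A write-through FIFO sector cache along the lines of points (1) and (3)
might look like the sketch below.  All names (cache_read, cache_write,
etc.) are invented for illustration, and an in-memory array stands in
for the real device; an actual tool would issue BIOS calls instead and
size NSLOTS from the user's boot-time setting.]

```c
#include <string.h>

#define SECSIZE 512
#define NSLOTS  4               /* cache slots; a real tool would size
                                   this from the boot-time setting */

struct slot {
    int  sector;                /* -1 means the slot is empty */
    char data[SECSIZE];
};

static struct slot cache[NSLOTS];
static int next_victim;         /* FIFO pointer: oldest slot is replaced */

static char disk[64][SECSIZE];  /* stand-in for the real device */

void cache_init(void)
{
    int i;
    for (i = 0; i < NSLOTS; i++)
        cache[i].sector = -1;
}

static struct slot *lookup(int sector)
{
    int i;
    for (i = 0; i < NSLOTS; i++)
        if (cache[i].sector == sector)
            return &cache[i];
    return 0;
}

static struct slot *claim(int sector)
{
    struct slot *s = &cache[next_victim];
    next_victim = (next_victim + 1) % NSLOTS;  /* FIFO allocation */
    s->sector = sector;
    return s;
}

void cache_read(int sector, char *buf)
{
    struct slot *s = lookup(sector);
    if (!s) {                        /* miss: fill the slot from "disk" */
        s = claim(sector);
        memcpy(s->data, disk[sector], SECSIZE);
    }
    memcpy(buf, s->data, SECSIZE);
}

void cache_write(int sector, const char *buf)
{
    struct slot *s = lookup(sector);
    if (!s)
        s = claim(sector);
    memcpy(s->data, buf, SECSIZE);
    memcpy(disk[sector], buf, SECSIZE);  /* write-through: the disk is
                                            never out of date */
}
```

[The write-through step is the safety property Karl asks for: the disk
copy is updated on every write, so popping a floppy can lose at most the
operation in flight.]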
apratt@atari.UUCP (Allan Pratt) (04/21/88)
From article <8804192256.AA06919@orville.nas.nasa.gov>, by
rowley@orville.nas.nasa.GOV (Karl Rowley):

> How much a delayed write cache would really buy on the ST is a good
> question.  In a lot of cases it may not buy much over a write-through
> cache.

But there is a significant place where it buys a lot -- copying lots of
files to a disk.  Think about it: GEMDOS is constantly reading the
directory of the disk looking for an empty slot, creating a file in that
slot (zero length), reading the FAT looking for space, writing the
file's data (can't help that), then writing the FAT and the new file
size to the directory.  For the next file, the cycle repeats.

If you could save the writes to the FAT and directory until all the
files are copied, you would save quite a bit.  Remember that the FAT and
root directory are at the beginning of the disk, so the head is seeking
back and forth all this time.  With cached reads AND DELAYED WRITES you
save the seek time as well.

Unfortunately, you don't know when the whole operation is complete.
That's why it's dangerous.  It gets much worse with removable media, of
course.  Apple's idea is to leave eject buttons off their drives.
GEMDOS's idea (and MS-DOS's) is to keep the disk as up-to-date as
possible, and to keep inconsistencies to a minimum time window.
(Orphaned clusters and data written before the FAT aren't
inconsistencies, they're lost data.  The FAT written before the data, or
a directory entry written before the FAT, is an inconsistency.)

If we could retrain people to pop disks only when the light goes out, we
would have a 5-second write-delay cache with no hazard, but that kind of
retraining is tough: after all, if you power up with no disk in the
drive, the light stays on FOREVER (on a 1040 or Mega with a bootable
hard drive).  Not a good foundation for retraining.  On the other hand,
that might be a start...

============================================
Opinions expressed above do not necessarily  -- Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else.    ...ames!atari!apratt
rowley@ORVILLE.NAS.NASA.GOV (Karl Rowley) (04/29/88)
I have a couple of programming questions related to building a hard disk
cache.

To build such a cache at the rwabs level, what is the cleanest way to do
it?  Intercepting all BIOS traps and looking for rwabs calls would be
one way.

My MWC manual lists a variable named "hdv_rw" under System Variables.
This variable is said to point to the hard disk read/write routine.
Does anybody know what parameters the hard disk read/write routine
expects when called?  Is this routine used by all calls to rwabs?

Karl Rowley
ames!orville.nas.nasa.gov!rowley
rowley@orville.nas.nasa.gov
"Any opinions expressed are my own."
wes@obie.UUCP (Barnacle Wes) (05/02/88)
In article <8804282151.AA29462@orville.nas.nasa.gov>,
rowley@ORVILLE.NAS.NASA.GOV (Karl Rowley) writes:

> I have a couple of programming questions related to building a hard
> disk cache.
>
> To build such a cache at the rwabs level, what is the cleanest way to
> do it?  Intercepting all BIOS traps and looking for rwabs calls would
> be one way.

It would slow down ALL BIOS calls as a side-effect.

> My MWC manual lists a variable named "hdv_rw" under System Variables.
> This variable is said to point to the hard disk read/write routine.
> Does anybody know what parameters the hard disk read/write routine
> expects when called?  Is this routine used by all calls to rwabs?

Yes, this vector is used by all (GEMDOS) reads/writes from/to the hard
disk.  The easiest way I know of to find out how the `rwabs' call works
is to look at the source for Moshe Braner's original ramdisk program for
the ST.  Is that still available on the net?  I've got an
extensively-hacked copy of it around here somewhere....
--
   /\             - "Against Stupidity,  -    {backbones}!
  /\/\  .    /\   -  The Gods Themselves -  utah-cs!uplherc!
 /    \/ \/\/  \  -   Contend in Vain."  -    sp7040!obie!
/ U i n T e c h \ -       Schiller       -        wes
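[To make the vector-hooking idea concrete: on the ST, hdv_rw lives at
address 0x476 and points at a routine taking the same arguments as the
BIOS Rwabs call (rwflag, buffer, count, starting sector, device).  The
sketch below stands the vector in with an ordinary C function pointer so
it is self-contained; on real hardware the store must be made in
supervisor mode (e.g. via Supexec), and the exact word/long sizes of the
stacked arguments must match the BIOS convention, which plain C ints
only approximate.]

```c
/* Rwabs-style calling sequence: rwflag (0 = read, 1 = write; +2
   suppresses the media-change check), buffer address, sector count,
   starting sector number, and device number. */
typedef long (*rwabs_t)(int rwflag, void *buf, int count, int recno, int dev);

/* Stand-in for the ROM read/write routine the vector normally holds. */
static long rom_rw(int rwflag, void *buf, int count, int recno, int dev)
{
    (void)rwflag; (void)buf; (void)count; (void)recno; (void)dev;
    return 0;                       /* E_OK */
}

/* On the ST this would be *(rwabs_t *)0x476L (hdv_rw); here it is an
   ordinary pointer so the sketch compiles anywhere. */
static rwabs_t hdv_rw_vec = rom_rw;

static rwabs_t old_rw;              /* saved original vector          */
static long    rw_calls;            /* how often our hook was entered */

static long my_rw(int rwflag, void *buf, int count, int recno, int dev)
{
    rw_calls++;                     /* a cache would look up sectors here */
    return old_rw(rwflag, buf, count, recno, dev);   /* chain onward */
}

void install_hook(void)
{
    /* On real hardware this store must happen in supervisor mode
       (e.g. via Supexec); omitted in this stand-alone sketch. */
    old_rw = hdv_rw_vec;
    hdv_rw_vec = my_rw;
}
```

[Chaining to the saved old vector is what keeps other resident programs
that hooked the same vector working.]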
mem@zinn.MV.COM (Mark E. Mallett) (05/03/88)
In article <1042@atari.UUCP>, apratt@atari.UUCP (Allan Pratt) writes:
< From article <8804192256.AA06919@orville.nas.nasa.gov>,
< by rowley@orville.nas.nasa.GOV (Karl Rowley):
<
< > How much a delayed write cache would really buy on the ST is a good
< > question. In a lot of cases it may not buy much over a write-through
< > cache.
<
< If you could save the writes to the FAT and directory until all the
< files are copied, you would save quite a bit. Remember that the FAT and
< root directory are at the beginning of the disk, so the head is seeking
< back and forth all this time. With cached reads AND DELAYED WRITES you
< save the seek time as well.
<
< Unfortunately, you don't know when the whole operation is complete.
< That's why it's dangerous. It gets much worse with removable media, of
< course.
Well, there's another answer somewhere between write-through and
write-back cache, and that is the idea of having some control over when
updates are posted to disk.  This works especially well when controlled
judiciously.

Four or five years ago, I had to do an MS-DOS filesystem implementation.
The way I did it was that all I/O went through a buffer cache.  I
arranged to be able to specify how the buffers were to be released to
the cache, in one of three ways: (1) buffer was unmodified; (2) mark
buffer as modified; (3) write the buffer immediately.  Another function
would write all "dirty" buffers.  Incremental file allocation was done
such that the FATs were released with the second mode.  With this scheme
(and a little other bookkeeping), it was possible to selectively sweep
the cache (thereby updating the disk) only at major checkpoints, such as
the final update to the directory entry.  I used variations on this
scheme in later systems as well.
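[The three release modes plus the sweep can be sketched as follows.  The
names (brelse, bsweep) and the write counter are invented for
illustration; the counter stands in for a physical Rwabs write.]

```c
#define SECSIZE 512
#define NBUFS   8

enum relmode {
    REL_CLEAN,      /* 1: buffer was not modified               */
    REL_DIRTY,      /* 2: mark modified, post to disk later     */
    REL_WRITE       /* 3: write through to disk immediately     */
};

struct buf {
    int  sector;
    int  dirty;
    char data[SECSIZE];
};

static struct buf bufs[NBUFS];
static long disk_writes;            /* counts physical writes */

static void disk_write(struct buf *b)
{
    disk_writes++;                  /* stand-in for a real device write */
    b->dirty = 0;
}

void brelse(struct buf *b, enum relmode mode)
{
    switch (mode) {
    case REL_CLEAN:
        break;
    case REL_DIRTY:
        b->dirty = 1;               /* e.g. incremental FAT updates */
        break;
    case REL_WRITE:
        disk_write(b);              /* e.g. a directory checkpoint */
        break;
    }
}

/* Sweep: flush every dirty buffer, invoked only at major checkpoints
   such as the final update to the directory entry. */
void bsweep(void)
{
    int i;
    for (i = 0; i < NBUFS; i++)
        if (bufs[i].dirty)
            disk_write(&bufs[i]);
}
```

[A FAT sector released dirty many times between checkpoints costs only
one physical write at the next sweep.]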
Another part of an answer (which I have combined with the above) is to
do preallocation.  Here, whenever a file is about to undergo automatic
extension, the extension can be done in an amount greater than a single
cluster (under control of some fixed or variable parameter).  This works
especially well when contiguous-best-try is used.  At file close time
(again, along with some bookkeeping), the extra clusters can be
released.  Much less wear and tear on the FAT, better data organization.
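[The arithmetic behind preallocation: FAT updates are amortized over
runs of clusters instead of paid per cluster.  A toy model, with all
names invented and a counter standing in for each FAT touch:]

```c
#define PREALLOC 8          /* clusters grabbed per extension run */

static int  allocated;      /* clusters currently allocated to the file */
static int  used;           /* clusters actually holding data           */
static long fat_updates;    /* how many times the FAT was touched       */

void file_append_cluster(void)
{
    if (used == allocated) {
        allocated += PREALLOC;  /* grab a whole run, contiguous-best-try */
        fat_updates++;          /* one FAT touch per run, not per cluster */
    }
    used++;
}

void file_close(void)
{
    if (allocated > used) {
        allocated = used;       /* release the spare clusters */
        fat_updates++;          /* one final trim of the chain */
    }
}
```

[Writing 20 clusters this way touches the FAT four times (three runs
plus the trim at close) instead of twenty.]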
-mm-
--
Mark E. Mallett PO Box 4188/ Manchester NH/ 03103
Bus. Phone: 603 645 5069 Home: 603 424 8129
uucp: mem@zinn.MV.COM (...decvax!elrond!zinn!mem or ...sii!zinn!mem)
BIX: mmallett
apratt@atari.UUCP (Allan Pratt) (05/05/88)
From article <307@zinn.MV.COM>, by mem@zinn.MV.COM (Mark E. Mallett):

> well, there's another answer somewhere between write-through and
> write-back cache, and that is the idea of having some control over
> when updates are posted to disk.
>
> [Important stuff deleted; go read the original article if you care.]

Please!  GEMDOS is not so foolish as to write to the disk every time a
cluster is added to a file.  Buffers in the cache are ALWAYS tagged in
the "mark as dirty" mode when written to.  Then, if the operation doing
the writing wants it updated immediately, it says so.

However, GEMDOS always flushes its whole cache when you close a file.
This is to update the FATs, directory, and any data sectors in the
cache.  Also, when you do most directory operations (rename, create,
delete) the directory sector is written immediately.

This introduces inefficiency when you are (for example) deleting a bunch
of files.  Rather than write the FAT and directory sectors after each
delete, a delayed-write cache could do the whole sequence of deletes in
RAM, and write them only once.  The trouble, as I mentioned in my
original posting, is that you can't tell when a series of deletes is
finished.  Delayed writes are hazardous to your disks, and especially to
your floppies.

Can anybody tell me categorically that TurboDOS does or does not delay
writes?  For instance, can you give me a recipe for demonstrating and
reproducing the delayed-write phenomenon?

============================================
Opinions expressed above do not necessarily  -- Allan Pratt, Atari Corp.
reflect those of Atari Corp. or anyone else.    ...ames!atari!apratt
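[The delete-batching arithmetic above can be shown with a toy model.
All names are invented; one counter stands in for each physical write,
and the unsolvable part -- knowing when the series of deletes is
finished -- appears as an explicit flush() call that nothing in the
system can issue for you.]

```c
static int  fat_dirty, dir_dirty;   /* pending in-RAM changes        */
static long physical_writes;        /* stand-in for real disk writes */

/* Delayed-write delete: both changes stay in RAM.  A write-through
   scheme would instead cost two physical writes right here. */
static void delete_file_delayed(void)
{
    fat_dirty = 1;      /* free the clusters in the in-RAM FAT copy    */
    dir_dirty = 1;      /* clear the slot in the in-RAM directory copy */
}

/* The flush nobody knows when to call. */
static void flush(void)
{
    if (fat_dirty) { physical_writes++; fat_dirty = 0; }
    if (dir_dirty) { physical_writes++; dir_dirty = 0; }
}
```

[Ten delayed deletes followed by one flush cost two physical writes
instead of twenty -- and leave the disk inconsistent until the flush
happens.]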