mwm@eris.UUCP (02/03/87)
In article <2174@batcomputer.tn.cornell.edu> braner@batcomputer.UUCP (braner) writes:
>As for background reading of the disk: Usually a program NEEDS the data
>it requested from the disk drive BEFORE it can do anything else, so
>no amount of multi-tasking will speed that program up while it's waiting.
>On the other hand, SOME specially-designed programs sometimes read the
>disk BEFORE the data is needed, i.e. invite an overlap of reading the
>disk and some other processing.  On the ST that is possible too, since
>the disk I/O is done via the DMA chip, which (as far as I know) CAN take
>a command and then execute it independently.  It does steal CPU memory
>cycles for that, though (or does it? experts, step up?).

DMA doesn't steal CPU cycles; it steals bus cycles.  The performance
impact can run from little or nothing (with double-speed, dual-ported
memory) to very significant, as in "why bother?", with stock memory and
a memory-intensive processor/application pair.

Of course, the real win in most applications isn't "read-ahead" but
"write-behind."  On Unix (and AmigaDOS), a task issues a write, the OS
catches it and allocates a block on disk, returns ok, and then starts
the physical I/O.  This is a win because the application doesn't have
to do it by hand, and the OS can do it more effectively than it can do
read-ahead.

The same effect can be achieved on a single-tasking system with a
write-back (delayed-write) cache, but I've seen very few systems do
that.  Most people just do a ram-disk, which has similar performance
(for the limited set of things in it) and is much easier to do.

	<mike
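The read-ahead overlap braner describes (hand the DMA chip a command,
then keep computing until the data is actually needed) can be sketched
in C.  Everything here is hypothetical: `dma_start`, `dma_wait`, and the
fake in-memory "disk" are stand-ins for illustration, not any real ST
or Amiga interface.

```c
/* Hypothetical read-ahead (double-buffering) sketch.  dma_start() and
 * dma_wait() stand in for "give the DMA chip a command" and "wait for
 * it to finish"; the disk is simulated with an array so the control
 * flow is testable without hardware. */

#define NBLK 4
static int fake_disk[NBLK] = { 1, 2, 3, 4 };

static int pending = -1;                 /* block the "DMA" is fetching */

static void dma_start(int blk) { pending = blk; }
static void dma_wait(int *out) { *out = fake_disk[pending]; }

/* Process every block, always keeping the NEXT read "in flight"
 * while we work on the current one. */
int sum_with_readahead(void)
{
    int buf, sum = 0, i;

    dma_start(0);                        /* prime the pipeline          */
    for (i = 0; i < NBLK; i++) {
        dma_wait(&buf);                  /* block only when data needed */
        if (i + 1 < NBLK)
            dma_start(i + 1);            /* overlap next read with work */
        sum += buf;                      /* "some other processing"     */
    }
    return sum;
}
```

The point is the ordering: the next transfer is issued *before* the
current block is processed, so on real hardware the DMA and the CPU run
concurrently, paying only for stolen bus cycles.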
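The write-behind path (OS accepts the write into a buffer, returns ok,
does the physical I/O later) can also be sketched.  Again, the names
(`cache_write`, `cache_flush`) and sizes are made up for illustration;
the "physical I/O" is simulated with a backing array.

```c
#include <string.h>

/* Hypothetical write-behind (delayed-write) cache sketch. */

#define BLKSIZE 512
#define NBLOCKS 8

static char disk[NBLOCKS][BLKSIZE];   /* stands in for the drive       */
static char cache[NBLOCKS][BLKSIZE];  /* in-core delayed-write buffers */
static int  dirty[NBLOCKS];           /* blocks awaiting physical I/O  */

/* Like the OS path described above: copy the caller's data into the
 * cache, mark the block dirty, and return "ok" at memory speed.  The
 * slow transfer is deferred to cache_flush(). */
int cache_write(int blk, const char *buf)
{
    if (blk < 0 || blk >= NBLOCKS)
        return -1;
    memcpy(cache[blk], buf, BLKSIZE);
    dirty[blk] = 1;
    return 0;                         /* ok, before any physical I/O   */
}

/* Deferred physical I/O: push every dirty block out in one pass.
 * Returns the number of blocks written. */
int cache_flush(void)
{
    int i, n = 0;

    for (i = 0; i < NBLOCKS; i++) {
        if (dirty[i]) {
            memcpy(disk[i], cache[i], BLKSIZE);  /* the slow part */
            dirty[i] = 0;
            n++;
        }
    }
    return n;
}
```

On a multi-tasking OS the flush runs in the background; on a
single-tasking system it would happen at idle time or at close, which
is why so few single-tasking systems bother and most settle for a
ram-disk instead.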