tim@efi.com (Tim Maroney) (12/18/90)
The message hasn't arrived here yet, but I received a forwarded message expressing skepticism that the new DayStar SCSI cache card could actually speed up compiles. The evidence submitted was essentially, "After all, try turning up the file system cache on your Mac and watch the compiles go slower!" This is a true and remarkable fact, but it's solely because the FS cache uses a sequential search algorithm that degrades severely for any cache greater than 64K or 128K. There has been a lot of discussion of the issue on MACAPP.TECH$ lately. A smart caching algorithm, especially given dedicated hardware, has the potential to vastly outperform the brain-dead Apple algorithm.

(And the real moral of the story is -- don't set your cache any higher than 128K, no matter how much RAM you have.)
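[The sequential-search behavior Tim describes can be sketched as below. This is an invented illustration, not Apple's actual File Manager code: a cache whose lookup is a linear scan over every entry, so that with 512-byte blocks a 1 MB cache means roughly 2048 comparisons on every miss, before the disk is even touched.]

```c
#include <stddef.h>

/* Hypothetical sketch of a linear-scan block cache. The structure and
 * names are made up for illustration; only the search strategy matches
 * the behavior described in the post. */

typedef struct {
    long block_num;          /* disk block number, or -1 if the slot is empty */
} CacheEntry;

/* Look up block_num; return its slot index, or -1 on a miss.
 * *comparisons reports how many entries the scan had to touch. */
int cache_lookup(const CacheEntry *cache, int n_entries,
                 long block_num, int *comparisons)
{
    int i;
    *comparisons = 0;
    for (i = 0; i < n_entries; i++) {
        (*comparisons)++;
        if (cache[i].block_num == block_num)
            return i;
    }
    return -1;               /* miss: the caller must now read from disk */
}
```

[Every miss walks the whole table, so doubling the cache size doubles the per-miss search cost -- which is why a larger cache can make a compile slower, and why a hashed or tree-indexed cache, whose lookup cost is independent of cache size, would not suffer this way.]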
lsr@Apple.COM (Larry Rosenstein) (12/19/90)
In article <1990Dec18.011129.6432@efi.com> tim@efi.com (Tim Maroney) writes:
>the compiles go slower!" This is a true and remarkable fact, but it's
>solely because the FS cache uses a sequential search algorithm that
>degrades severely for any cache greater than 64K or 128K. There has

I heard that this was caused by a bug, rather than a dumb algorithm, and that it has been fixed in System 7.

-- 
Larry Rosenstein, Object Specialist
Apple Computer, Inc.  20525 Mariani Ave, MS 3-PK  Cupertino, CA 95014
AppleLink: Rosenstein1    domain: lsr@Apple.COM
UUCP: {sun,voder,nsc,decwrl}!apple!lsr
keith@Apple.COM (Keith Rollin) (12/19/90)
In article <1990Dec18.011129.6432@efi.com> tim@efi.com (Tim Maroney) writes:
>The message hasn't arrived here yet, but I received a forwarded message
>expressing skepticism that the new DayStar SCSI cache card could
>actually speed up compiles. The evidence submitted was essentially,
>"After all, try turning up the file system cache on your Mac and watch
>the compiles go slower!" This is a true and remarkable fact, but it's
>solely because the FS cache uses a sequential search algorithm that
>degrades severely for any cache greater than 64K or 128K. There has
>been a lot of discussion of the issue on MACAPP.TECH$ lately. A smart
>caching algorithm, especially given dedicated hardware, has the
>potential to vastly outperform the brain-dead Apple algorithm.
>
>(And the real moral of the story is -- don't set your cache any higher
>than 128K, no matter how much RAM you have.)

Actually, the problem isn't with the linear search performed by the RAM cache. As I understand it, the problem was that cached blocks got put on a free block chain when the file they belonged to was closed. However, those blocks were never recovered if the file was reopened. With something like a development system, where files are being opened and closed all the time, you gained no benefit from the cache, but suffered from all of the overhead. This would be true, though to a lesser extent, even with a better cache searching algorithm.

The solution, as it will be implemented in System 7.0, is to recover blocks from the free chain if the file they belong to is reopened. To report the results of a very unscientific experiment, this allowed someone to compile the MacApp library on a IIci under 7.0b1 in the same time it took me to compile it under 6.0.5 on a IIfx.

Also, while this subject was the topic of discussion on MacApp.Tech$ lately, it was not true that EVERYONE who posted results reported slower times when using a RAM cache.
It was pretty true that larger caches (say 512K or larger) created more cache overhead and contributed more to the problem. In my own experiments a couple of years ago, I found that 128K, 192K, or 256K worked best for me.

-- 
------------------------------------------------------------------------------
Keith Rollin  ---  Apple Computer, Inc.  ---  Developer Technical Support
INTERNET: keith@apple.com
UUCP: {decwrl, hoptoad, nsc, sun, amdahl}!apple!keith
"Argue for your Apple, and sure enough, it's yours" - Keith Rollin, Contusions
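[The close/reopen behavior Keith describes can be sketched as below. All names and the array-based layout are invented for illustration; the point is only the policy: closing a file puts its cached blocks on a free chain without erasing their identity, and the System 7.0 fix he describes reclaims those blocks when the file is reopened, where the old behavior left them to be overwritten and reread from disk.]

```c
#define NSLOTS 16

/* One cached disk block. A real free chain would be a linked list;
 * a flag on each slot is enough to show the policy. */
typedef struct {
    int file_id;        /* owning file, or -1 if the slot holds no data */
    int block_num;      /* block number within that file */
    int on_free_chain;  /* nonzero once the owning file has been closed */
} Block;

/* Closing a file moves its blocks to the free chain -- but the blocks
 * keep their identity, and the data is still sitting in RAM. */
void file_close(Block *cache, int file_id)
{
    int i;
    for (i = 0; i < NSLOTS; i++)
        if (cache[i].file_id == file_id)
            cache[i].on_free_chain = 1;
}

/* The 7.0 fix as described in the post: on reopen, reclaim any of the
 * file's blocks still on the free chain instead of rereading them from
 * disk. Returns the number of blocks recovered; under the old behavior
 * this step simply never happened, so every reopen started cold. */
int file_reopen(Block *cache, int file_id)
{
    int i, recovered = 0;
    for (i = 0; i < NSLOTS; i++)
        if (cache[i].file_id == file_id && cache[i].on_free_chain) {
            cache[i].on_free_chain = 0;
            recovered++;
        }
    return recovered;
}
```

[A compiler that opens and closes the same headers over and over is the worst case for the old policy and the best case for the new one, which matches the MacApp build times reported above.]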