rpk@wheaties.ai.mit.edu (Robert Krajewski) (08/03/90)
Does anybody know, or have an intelligent guess, as to how Windows decides when to discard resources or discardable global memory blocks when there is a lot of memory? In other words, assume you've got a lot of memory to work with, and you're the Windows memory manager. When do you start discarding things:

  - When the total size of the task's discardable memory reaches some magic number?
  - In enhanced mode, after you take a page fault while executing arbitrary task code?
  - In enhanced mode, after you take a page fault to satisfy a memory request?
  - Or, as before, when you simply run close to the memory limit?

The motivation for this is to use discardable memory as a cache for something big and slow, like a CD-ROM. (Really big and really slow.) I've seen CD-ROM applications on the Mac that benefited from a 3 Mbyte disk cache (using the standard one in the Control Panel). The cool thing about using discardable memory blocks is that the cache is a lot more adaptive, and the caching strategy can operate at a level closer to the application's purposes. Given the difference between hard disk and CD-ROM access times, we don't even mind taking a page fault to get at the cached memory.

--
Robert P. Krajewski
Internet: rpk@ai.mit.edu ; Lotus: rkrajewski@ldbvax.lotus.com