pete@wlbr.EATON.COM (Pete Lyall) (01/01/70)
In article <989@pixar.UUCP> bp@pixar.UUCP (Bruce Perens) writes:

>The way I'd like to see an OS-9 disk cache implemented would be like the
>SDOS cache, but with one more wrinkle - use all of the memory not being
>used by processes, and when a process wants more memory, flush some
>cache buffers and surrender that memory to the process.
....

What a resource management nightmare that would be! Could you imagine
having to flush the cache every time a new path was opened? How about when
f$all64 required a new block? Even simple, non-executing f$loads? It seems
you'd spend as much time allocating & deallocating RAM as doing anything
else. I suspect it may be better to empirically determine the optimal cache
size for your system, and then fix it at that size at system startup.

--
Pete Lyall
Usenet:     {trwrb, scgvaxd, ihnp4, voder, vortex}!wlbr!pete
Compuserve: 76703,4230 (OS9 Sysop)
OS9 (home): (805)-985-0632 (24hr./1200 baud)
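For what it's worth, the fixed-cache-size approach Pete suggests is simple
enough to sketch: a buffer pool carved out once at startup and recycled
LRU-fashion from then on. Below is a rough C outline; every name in it
(cache_init, cache_get, disk_read, disk_write, BLKSIZ) is invented for
illustration and is not taken from any OS-9 or SDOS source.

#include <stdlib.h>

#define BLKSIZ 256                     /* assumed sector size */

struct cbuf {
    struct cbuf *next;                 /* LRU chain, most recent first */
    long         blkno;                /* disk block held, -1 if empty */
    int          dirty;
    char         data[BLKSIZ];
};

/* Hypothetical driver hooks, assumed to exist elsewhere. */
extern void disk_read(long blkno, char *buf);
extern void disk_write(long blkno, char *buf);

static struct cbuf *lru_head;          /* most recently used buffer */

/* Called once at startup; 'n' is the empirically chosen cache size.
 * After this the pool never grows or shrinks. */
void cache_init(int n)
{
    int i;

    for (i = 0; i < n; i++) {
        struct cbuf *b = malloc(sizeof *b);

        if (b == NULL)
            break;                     /* take what we can get */
        b->blkno = -1;
        b->dirty = 0;
        b->next  = lru_head;
        lru_head = b;
    }
}

/* Return the buffer holding 'blkno', recycling the least recently
 * used buffer (writing it back first if dirty) on a miss. */
struct cbuf *cache_get(long blkno)
{
    struct cbuf *b = lru_head, *prev = NULL;

    while (b != NULL && b->blkno != blkno && b->next != NULL) {
        prev = b;
        b = b->next;
    }
    if (b == NULL)
        return NULL;                   /* cache_init() never ran */

    if (b->blkno != blkno) {           /* miss: recycle the LRU tail */
        if (b->dirty)
            disk_write(b->blkno, b->data);
        disk_read(blkno, b->data);
        b->blkno = blkno;
        b->dirty = 0;
    }
    if (prev != NULL) {                /* move the buffer to the head */
        prev->next = b->next;
        b->next    = lru_head;
        lru_head   = b;
    }
    return b;
}

Because the pool is sized once, the cache and the system allocator never
have to negotiate over memory at run time, which is the point of fixing the
size at startup.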
bp@pixar.UUCP (08/11/87)
The way I'd like to see an OS-9 disk cache implemented would be like the
SDOS cache, but with one more wrinkle - use all of the memory not being
used by processes, and when a process wants more memory, flush some cache
buffers and surrender that memory to the process. It wouldn't be too hard
to implement on Coco III if the cache blocks were organized into
segment-sized units.

bp
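A rough C sketch of the allocation path this implies - every name in it
(sys_alloc, try_alloc, cache_release_one) is invented for illustration and
is not taken from real OS-9 code:

#include <stddef.h>

/* Hypothetical hooks, assumed to exist elsewhere in the system:
 * try_alloc() is an ordinary free-list allocator, and
 * cache_release_one() flushes one cache buffer (writing it to disk
 * if dirty) and returns its memory to the free list. */
extern void *try_alloc(size_t n);
extern int   cache_release_one(void);

/* The allocation path under the "cache owns all spare RAM" scheme:
 * a failed allocation turns into cache flushing until either the
 * request fits or the cache has nothing left to give. */
void *sys_alloc(size_t n)
{
    void *p;

    while ((p = try_alloc(n)) == NULL) {
        if (!cache_release_one())
            return NULL;               /* cache empty: truly out of RAM */
    }
    return p;
}

Every request for memory - opening a path, an f$all64 block, an f$load -
would come through something like sys_alloc(), which is exactly where
Pete's objection above bites: any of them can turn into cache flushing.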
bp@pixar.UUCP (08/16/87)
In article <1114@wlbr.EATON.COM> Pete Lyall (wlbr!pete) writes, following
up my posting <989@pixar.UUCP>:

> What a resource management nightmare that would be! ...
> Seems you'd spend as much time allocating & deallocating RAM
> as doing anything else.

Well, I started out thinking that you could hand memory between the cache
and the rest of the system in reasonably large clusters - say 8K at a
time - keeping them contiguous, so that all of the RAM below a (moving)
boundary address is owned by the system and all above it is owned by the
cache. The system gets 8K from the cache and allocates from it until it's
used up. Only when that 8K is exhausted, or when it needs a chunk larger
than 8K, would it ask the cache to move the boundary up; and only when it
had more than 8K of contiguous free space below the boundary would it
surrender memory back to the cache.

That would reduce the overhead for SMALL memory allocations, but it would
introduce a `hiccup' once in a while, rather like the hiccup Lisp gets when
it does garbage collection. Memory fragmentation in the system would make
the boundary creep ever higher. It would also be slow to start a new
process if a lot of dirty blocks had to be written out to free the memory -
and even slower if those same blocks, just flushed from the cache, had to
be read back in to start the process!

Back to the drawing board...

bp
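A rough C sketch of that moving-boundary arrangement, using the 8K cluster
size from the posting; all of the names (grow_system, shrink_system,
cache_flush_range and the rest) are invented for illustration:

#include <stddef.h>

#define CLUSTER (8 * 1024)             /* the "reasonably large" unit */

/* Hypothetical hooks, assumed to exist elsewhere in the system. */
extern void cache_flush_range(char *lo, char *hi); /* write out & discard */
extern void cache_add_region(char *base, size_t len);
extern void sys_add_free(char *base, size_t len);
extern void sys_remove_free(char *base, size_t len);
extern int  sys_region_is_free(char *base, size_t len);

static char *ram_base;                 /* bottom of the shared region */
static char *ram_top;                  /* top of the shared region */
static char *boundary;                 /* below: system RAM; above: cache */

/* The system pool is exhausted (or the request is bigger than what is
 * left): move the boundary up by whole clusters, flushing the cache
 * blocks that occupied that region first.  This is the occasional
 * "hiccup". */
int grow_system(size_t need)
{
    size_t take = ((need + CLUSTER - 1) / CLUSTER) * CLUSTER;

    if (take > (size_t)(ram_top - boundary))
        return -1;                     /* cache has nothing left to give */
    cache_flush_range(boundary, boundary + take);
    sys_add_free(boundary, take);
    boundary += take;
    return 0;
}

/* Give one cluster back to the cache.  This is only possible when the
 * whole cluster just below the boundary is free, which fragmentation
 * makes rare - hence the boundary "creeping ever higher". */
int shrink_system(void)
{
    if ((size_t)(boundary - ram_base) < CLUSTER ||
        !sys_region_is_free(boundary - CLUSTER, CLUSTER))
        return -1;
    sys_remove_free(boundary - CLUSTER, CLUSTER);
    boundary -= CLUSTER;
    cache_add_region(boundary, CLUSTER);
    return 0;
}

Both problems Bruce points out show up here: the cache_flush_range() call
in grow_system() is the start-up hiccup, and shrink_system() almost never
succeeding is the boundary creep.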