chuckp@ncr-fc.FtCollins.NCR.com (Chuck Phillips) (01/04/90)
>> In article <129799@sun.Eng.Sun.COM> jpayne%flam@Sun.COM (Jonathan
>> Payne) writes:
>> I thought paging algorithms were geared towards sequential access,

In article <PCG.90Jan3170738@thor.cs.aber.ac.uk> pcg@thor.cs.aber.ac.uk
(Piercarlo Grandi) replies:
> Most paging algorithms implement some approximation of LRU; LRU is
> *guaranteed* to trash on sequential access.

Nit: _freeing_ of disk buffers/pages/etc. is typically LRU.  However,
many OSes (e.g. UNIX, VMS "clusters", some of the newer disk controller
_hardware_) also read ahead, which benefits sequential reading.  With
the BSD file system's clustering by cylinder groups combined with write
caching, sequential writing benefits as well.  Thus, while sequential
access is slower than no access at all, it's often _much_ faster than
random access.  (HINT: e.g. typical lisp memory allocation)

If you want to see what _really_ slows GNU emacs, type "C-h a <newline>"
and write your reply (in another window) while you wait for a response.

BTW, I'm leaning toward the buffer gap approach since reading Jonathan
Payne's lucid explanation in comp.editors and comp.emacs.  If you
haven't read it, read it.

#include <std/disclaimer.h>
--
Chuck Phillips  chuckp%bach.ncr-fc.FtCollins.NCR.COM