bph@buengc.BU.EDU (Blair P. Houghton) (06/12/89)
In article <10393@smoke.BRL.MIL> gwyn@brl.arpa (Doug Gwyn) writes:
>In article <381@biophys.UUCP> ruba@biophys.UUCP (Rudolf Baumann) writes:
>>I would like a clear description about the difference of paging and
>>swapping and which of both has more influence on the performance of
>>a system.
>
>There are a lot of variations, but the basic distinction is that
>an entire process space is swapped out, while only a segment of a
>process is paged out to backing store.  Other things being equal
>(which they usually aren't), paging evidently involves less work
>for the operating system and consequently should be more efficient.
>
>In actual practice, if the job mix demands much more main memory
>than is actually available, simplistic implementations of both
>swapping and paging are likely to lead to severe thrashing and
>consequent overall system inefficiency.  Additional cleverness is
>required in the system memory allocation and scheduling algorithms
>to overcome this tendency.
>
>It's a bit strange that you couldn't find a discussion of this
>in an operating system design book; I would have thought that it's
>a standard topic.

Sim-sala-bim... POOF!

Edited from the Encyclopedia of Computer Science, A. Ralston, et al.,
eds., Van Nostrand Reinhold (though the copyright is listed as
belonging to Litton Educational Publishing Inc... ?:-| ), 1976.
[It's kind of funny, since it makes absolutely no reference to Unix
or C... :-) (which, of course, came two years later to those outside
the dungeons of A.T.&T.)]:

Paging, Page Fault, Demand Paging, and Overlaying:

"Items are fetched up into main store [from disk or cache] as
required, a page at a time.  [There are about 1K words to the page...]

"When a program references a nonresident datum or instruction [i.e.,
one not in main RAM], a 'page fault' is said to occur and an
interruption is raised, giving control of the processor to a portion
of the operating system, which then handles the [paging] via software.
This technique of loading pages only as they are requested is called
'demand paging.'  When loadings are specified in advance of reference
to them, the technique is best called 'overlaying.'"
					-C. C. Foster

Swapping:

"When the operating system locates a program in a 'waiting' state
--i.e., one that can no longer use the CPU normally because it is
waiting to complete an I/O operation-- it 'swaps' the program [from
primary storage] to secondary storage, replacing it with another
program that is ready to use the CPU...

"Demand paging and segmentation allow programs to be only partially
resident for execution.  When reference to a nonresident page or
segment is detected, the operating system 'swaps' in the needed page.
Systems of this type were developed in the early 1960's [!!] on the
Atlas machine in England and on the Burroughs 5000...  These
techniques, called 'demand paging,' provide more efficiency by
reducing the average size of a program resident in primary storage...
The reduction in size was achieved at the expense of the paging I/O,
which, if not properly controlled, could result in an excessive I/O
condition known as 'thrashing'..."
					-G. E. Bryan

Bryan also describes the earliest swapping strategies for
multiprocess timesharing systems as involving 500-ms timeouts
balanced against 500-ms transfer times to provide 50% CPU efficiency,
with complete replacement of primary memory on each swap, ostensibly
because the code was not relocatable.  The inherent inefficiency of
this method is that not all processes required the full 500 ms.

Working Set, Thrashing:

"...[the 'working set' is] the minimum set of instructions and data
needed in main memory for efficient processing...

"If memory is overcommitted, then at least one task will not have its
working set... present, and therefore the task will be unable to
operate efficiently.
If, in addition, the memory management policy attempts to acquire
space for the pages requested by the inefficient task by preempting
them from the working sets of other tasks, then these deprived tasks
may also find their working sets incomplete in memory, whereupon
they, too, may enter an inefficient mode of operation.  This can
spread rapidly among all active tasks, so that the net effect is
that no task at all can operate efficiently, a condition called
'thrashing'..."
					-P. J. Denning and D. E. Denning

I think what they mean is that it's no good for _every_ process to
have to page itself in _every_ time control is switched to that
process.

No etymology is provided for the term "thrashing," but I'd be willing
to bet money that it comes from the appearance of those drum heads'
thrashing about the drum every tenth-second or so, looking for a page
to swap in because the next process's working set is nonresident.

				--Blair
				  "But then, I bet the farm on
				   Sunday Silence...  :-(
				   Poor horsie."
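
P.S.  The Dennings' mechanism can be seen in miniature with a toy
demand-paging simulation -- my own sketch, not from the encyclopedia,
and it assumes straight LRU replacement, which real systems of the
day only approximated:

```python
# Toy demand-paging simulation: a page is loaded only when referenced
# (a "page fault"); when memory is full, the least-recently-used page
# is evicted.  Names and parameters here are invented for illustration.

from collections import OrderedDict

def count_faults(refs, frames):
    """Run a page-reference string through an LRU-managed memory of
    `frames` page frames; return the number of page faults."""
    memory = OrderedDict()                 # pages resident, in LRU order
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)       # hit: mark most recently used
        else:
            faults += 1                    # fault: load the page on demand
            if len(memory) >= frames:
                memory.popitem(last=False) # evict least recently used
            memory[page] = None
    return faults

# One task whose working set is 3 pages, referenced round-robin:
refs = [0, 1, 2] * 100

# Enough frames for the working set: 3 compulsory faults, then all hits.
print(count_faults(refs, frames=3))   # -> 3

# One frame short of the working set: LRU evicts exactly the page that
# is needed next, so *every* reference faults.
print(count_faults(refs, frames=2))   # -> 300
```

One frame short of the working set, a cyclic reference string under
LRU faults on every single access -- the per-task analogue of the
system-wide collapse the Dennings describe.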