pcg@uunet.UU.NET (02/28/89)
While IBM claimed in advertisements that they invented virtual memory (gasp), the first virtual memory machine was the Atlas from Manchester University. It had an inverted page table scheme with a single (small, 20-bit) address space; several papers describe it. Manchester took up inverted page tables again for their 64-bit MU6 mini-supercomputer, and there are papers and theses on that as well. For information not in the published papers, I am sure that Dr. A. Knowles or Dr. V. Woods of the CS Dept., U. of Manchester, Oxford Rd., Manchester, GB, will be able to help.

Several years ago I wrote a paper on the ins and outs of inverted page tables, and in particular on a fairly original inverted page table design of mine, especially suitable for high performance microprocessors, and its implications; I have been unable to find the time to revise it for publication, though. I have also been very wary of publishing it, because IBM apparently "invented" and patented inverted page tables just a few years ago (ROMP and S/38), and I do not want to run into trouble with their claims and their well funded lawyers [IBM could patent *you*, and then make you find the money to prove in court that you do not belong to them]. In other words, all research work on inverted page tables, from Atlas (1962) onwards, may attract IBM's attentions. Be warned!

Inverted page tables are extremely good for operating systems, as they are very much more efficient than conventional ones, especially for large sparse address spaces: the table is sized by physical memory, not by the virtual address space. It is somewhat laborious to implement shared memory with them, but then of course an operating system need not rely on or support shared memory (or message passing), like MUSS. On the other hand, the really good use for inverted page tables is in single address space machines with capability based (rather than multiple address space based) protection.
An inverted page table MMU can be built from a few Kbytes of RAM and ROM, very easily and cheaply (more so than a traditional one, I guess); if you have associative RAM, that only helps, and the ROM for the hashing then becomes unnecessary.
ken@gatech.edu (Ken Seefried iii) (03/01/89)
In article <6527@saturn.ucsc.edu> mcvax!cs.aber.ac.uk!pcg@uunet.UU.NET writes:
>Inverted page tables are extremely good for operating systems, as they are
>very much more efficient than straight ones, especially for large sparse
>address spaces. It is somewhat laborious to implement shared memory with
>them, but then of course an operating system need not rely on or support
>shared memory (or message passing), like MUSS. On the other hand the
>really good use for inverted page tables is for single address space
>machines with capability (and not multiple address space) based protection.

The MACH papers discuss some of the difficulties they had porting that operating system to the PC/RT because of the awkward implementation of shared memory on such an MMU. I do not recall right now how they solved the problem. Perhaps someone at CMU could elaborate.

Too bad the evil empire (IBM) slapped yet another patent on this idea. It would, however, be interesting to hear how Big Blue does System V shared memory on the ROMP MMU.

...ken seefried iii
ken@gatech.edu
mdb@ucscc.UCSC.EDU (Mark Brukhartz) (03/04/89)
> The MACH papers discuss some of the difficulties they had porting
> that operating system to the PC/RT because of the awkward
> implementation of shared memory on such an MMU. I do not recall right
> now how they solved the problem. Perhaps someone at CMU could
> elaborate.
>
> Too bad the evil empire (IBM) slapped yet another patent on this
> idea. It would, however, be interesting to hear how Big Blue does
> System V shared memory on the ROMP MMU.
>
> ...ken seefried iii
> ken@gatech.edu

The ETA-10 supercomputer features an associative page table. Its memory management is based upon that of the CDC Cyber 205, which has been around for a while now. The software difficulties of associative page tables appear to be similar to those of inverted page tables.

Each associative page descriptor contains a virtual address and a physical address. The hardware searches the page table sequentially by virtual address, then moves the matching entry to the beginning of the table. The first sixteen page descriptors are cached in a fast associative store.

The ETA-10 System V Release 3.0 memory management implementation builds an associative page table at the beginning of a time slice from the System V region structures. At the end of the time slice, it updates the region structure reference and modify bits from the associative table. This is done in associative table order, since the table is reordered during execution; the corresponding region entries can be found by a nearly direct search.

Multiple regions could perhaps be mapped simultaneously with the machine's lock-and-key mechanism. That might improve context switch time at the expense of page fault handling time.

Are there other UNIX implementations on machines with unusual page table organizations?

Mark Brukhartz
Lachman Associates, Inc.
..!{amdahl, masscomp, nucsrl, sun}!laidbak!mdb
rpd@cs.cmu.edu (Richard Draves) (03/04/89)
Excerpts from ext.nn.comp.os.research: 1-Mar-89, Re: Inverted page tables, Ken Seefried iii@gatech:

> The MACH papers discuss some of the difficulties they had porting
> that operating system to the PC/RT because of the awkward
> implementation of shared memory on such an MMU. I do not recall right
> now how they solved the problem. Perhaps someone at CMU could
> elaborate.

Suppose a page is mapped into several tasks. Only one of the tasks actually has access to the page at any given time. If another task attempts to access the page, it faults, and the fault handler moves the page's mapping over to the faulting task (no IO is involved). The end result is that context switches between two tasks running the same program (sharing text) are more expensive than you might expect.

The RT pmap code (the machine-dependent VM code) could theoretically notice what's happening in some common situations, like shared text, and use a shared segment to avoid these faults, but it'd be a tad awkward to implement.

Rich