ham@Polya.Stanford.EDU (Peter R. Ham) (07/19/89)
Can anyone offer me an explanation why segmented addressing schemes like
the one used by Multics way back when are not in vogue today?  Segments
seem to offer a lot in terms of controlled sharing for things like shared
libraries, which are a hot topic again today.  I'm also interested in
tags in the architecture that make pointers visible at run-time.

Are there any architectures out there that support a large number of
segments or objects that can be relatively small (i.e., much smaller than
a typical page size)?

One explanation for the lack of success of segmentation may be the extra
overhead of looking up a segment table entry on every memory reference.
Does anyone know of any statistics on this?  Couldn't it be overcome by
some kind of caching mechanism, like a large TLB?
--
Peter Ham                    PO Box 3430    (h) (415) 324-9645
MS Computer Science Student  Stanford, CA   ham@polya.stanford.edu
Stanford University          94309          (o) (415) 723-2513
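[To make the lookup-overhead question concrete, here is a minimal C sketch
of translating a (segment, offset) address through a segment table, with a
small direct-mapped descriptor cache standing in for the "segment TLB" the
post asks about.  The table size, field names, and cache geometry are all
illustrative assumptions, not the layout of any real machine.]

    #include <stdint.h>

    /* Illustrative segment descriptor.  Field names and widths are
       assumptions for this sketch, not any real machine's layout. */
    struct seg_desc {
        uint32_t base;       /* physical base address of the segment */
        uint32_t limit;      /* segment length in bytes */
        unsigned valid:1;    /* descriptor is in use */
        unsigned writable:1; /* writes permitted */
    };

    #define NSEGS  4096      /* segment table entries (assumed size)  */
    #define NCACHE 64        /* small direct-mapped "segment TLB"     */

    static struct seg_desc seg_table[NSEGS];

    /* Recently used descriptors, tagged by segment number, so the
       in-memory table is only consulted on a miss. */
    static struct {
        unsigned filled:1;
        uint32_t tag;
        struct seg_desc d;
    } seg_cache[NCACHE];

    /* Translate (seg, off) to a physical address; return -1 on a
       validity, bounds, or protection fault. */
    int64_t translate(uint32_t seg, uint32_t off, int is_write)
    {
        unsigned slot = seg % NCACHE;
        struct seg_desc d;

        if (seg_cache[slot].filled && seg_cache[slot].tag == seg) {
            d = seg_cache[slot].d;      /* hit: no table access */
        } else {
            if (seg >= NSEGS)
                return -1;
            d = seg_table[seg];         /* the extra memory reference */
            seg_cache[slot].filled = 1;
            seg_cache[slot].tag = seg;
            seg_cache[slot].d = d;
        }
        if (!d.valid || off >= d.limit)
            return -1;
        if (is_write && !d.writable)
            return -1;
        return (int64_t)d.base + off;
    }

On a hit the per-reference cost is a tag compare and a copy rather than a
memory access, which is the same argument that makes a large TLB workable
for page tables.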
segall@caip.rutgers.edu (Ed Segall) (07/21/89)
> Can anyone offer me an explanation why segmented addressing schemes like
> the one used by Multics way back when are not in vogue today?

Check out the HP Precision Architecture.

--Ed
--
uucp: {...}!rutgers!caip.rutgers.edu!segall
arpa: segall@caip.rutgers.edu
bga@raspail.cdcnet.cdc.com (Bruce Albrecht) (07/25/89)
The Control Data Cyber 180 line follows the Multics model fairly closely.
Each process may have up to 4095 segments, each up to 2**32 bytes long,
and segments may be shared by multiple processes.  Even I/O is done as
paged memory accesses.  I don't know much about the hardware details, but
I don't believe segment validation causes much of an overhead problem.
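[The sharing described here can be pictured as per-process segment tables
whose entries reference the same underlying segment object.  A minimal C
sketch, assuming a descriptor-pointer representation the post does not
specify; only the 4095-entry table and the 2**32-byte segment size come
from the post, everything else is invented for illustration.]

    #include <stdint.h>
    #include <stdio.h>

    struct segment {              /* one shared segment object */
        uint64_t phys_base;
        uint64_t length;          /* up to 2**32 bytes */
    };

    #define PROC_SEGS 4095        /* per-process segment table size */

    struct process {
        struct segment *segtab[PROC_SEGS];  /* per-process mapping */
    };

    int main(void)
    {
        struct segment lib = { 0x100000, 1 << 20 }; /* a shared library */
        struct process a = {0}, b = {0};

        a.segtab[7]  = &lib;      /* process A sees it as segment 7  */
        b.segtab[42] = &lib;      /* process B sees it as segment 42 */

        printf("same segment? %s\n",
               a.segtab[7] == b.segtab[42] ? "yes" : "no");
        return 0;
    }

Because each process names the shared segment through its own table slot,
the two processes need not agree on a segment number, yet protection and
bounds can still be checked per process on every reference.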