jqj@cornell.UUCP (02/27/85)
From: jqj (J Q Johnson)

Many large lisp systems already have address space requirements greater than 16 MB, even after garbage collection.  At Cornell we configure one of our VAXes with a process limit of 70MB or so, and we find several of our research projects, ranging from lisp to fortran (a numerical analysis project on sparse [!] arrays) to graphics, using such large address spaces.

Or consider this example from graphics: suppose I have a 1Kx1Kx24 frame buffer (which is typical of current technology).  That's 3MB of address space (and of real memory someplace!) for each image.  Some animation algorithms require smoothing over time, implying that you'd really like to be able to keep a dozen or so images in memory for comparison.  Already you're up to 40MB of address space or so.

Or consider a simulation of a complex 3-d physical system on a 1Kx1Kx1K grid with, say, 64 bytes of data per intersection.  If you do no compression that's 36 bits of address space (2^30 intersections times 2^6 bytes); presumably a large class of such problems can be compressed using sparse-matrix techniques by a factor of 100 (hence fitting into 32 bits, including code), but presumably only a much smaller class can be compressed by a factor of 10000.

Even 32 bits is far too few for some applications, e.g. a capability-based architecture where every file the system ever touched is part of the address space.  I can't currently think of an application where I'd need more than 64 bits of address, but that's only because I haven't tried very hard.  So 32 bits is inadequate in principle, but it seems like a nice number that will satisfy most traditional-design needs for the rest of the decade.