darrell@sdcsvax.UUCP (03/26/87)
In article <2859@sdcsvax.UCSD.EDU> johnl@ima.ISC.COM (John R. Levine) writes:
>In article <2850@sdcsvax.UCSD.EDU> bagwill@decuac.DEC.COM (Bob Bagwill) writes:
>>
>> ... Why not define all storage as extended memory structures
>>that may be loaded as needed. ...
>
> ... it makes device independence very hard. ...
>
>The other reason that you want files is that they have an existence and a
>naming structure separate from the programs that manipulate them, which is
>kind of unavoidable. You as a human would probably rather call a file
>/usr/fred/games/source/pinball_bounce.c, while your program would rather
>have a nice handle like a memory address of 0x18723d70 or a file handle of 5.

I'd like to continue this defence of the file concept and to turn the
question the other way: Why not turn all memory into files?

The structured UNIX file tree is a very comfortable means of handling big,
non-uniform amounts of data. Much more comfortable than a uniform, binary
address space from zero to infinity. Even an address space from zero to
2^16 (as on this PDP-11/70 called Obelix) seems too big and unstructured
to me. Why not continue the partitioning of data into files and
directories down to the variable/record/statement level?

Of course, we would then have to consider interpreted languages with more
overhead than today's optimized number-crunchers, but, as computers as
such become faster and smaller, we would have to find a way to use their
extra power anyhow :-). I think this idea has been used to implement some
virtual LISP and Smalltalk machines, but these aren't too common.

To the comp.arch people: Who will invent the tree-structured memory chip?
--
Name:  Lars Aronsson
Snail: Rydsvagen 256 A:10, S-582 48 Linkoping, Sweden
UUCP:  {mcvax,seismo}!enea!liuida!obelix!l-aron
ARPA:  l-aron@obelix.ida.liu.se