[comp.arch] Extremely Large Files

cliffc@libya.rice.edu (Cliff Click) (08/09/90)

I think the large file folks don't need more address bits; they need a
better addressing scheme.  Using infinite-precision integers in the
applications lets them write code that can handle a file of any size.
Then it's the OS folks' job to understand seek(1 Trillion) and translate
that to a disk sector, or to read/write/load that between memory and disk.
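
To make the application side concrete, here is a minimal sketch in C:
the offset is carried as a chunk count plus a remainder rather than a
single native long (a cheap stand-in for a true infinite-precision
integer), and the wrapper walks out with relative seeks.  The names
big_off and big_seek are invented for illustration, not a real
interface -- and whether the kernel can actually track a position that
large is exactly the OS folks' half of the job.

    /* Hypothetical sketch: reach a file position wider than a signed
     * 32-bit long by issuing bounded relative seeks.  big_off and
     * big_seek are invented names, not a real interface. */
    #include <unistd.h>

    #define CHUNK 0x40000000L        /* 1 GB: safely inside a signed long */

    typedef struct {
        unsigned long chunks;        /* position = chunks*CHUNK + rem */
        unsigned long rem;           /* always < CHUNK                */
    } big_off;

    int big_seek(int fd, big_off pos)
    {
        unsigned long i;
        if (lseek(fd, 0L, SEEK_SET) == -1)       /* rewind */
            return -1;
        for (i = 0; i < pos.chunks; i++)         /* walk out in 1 GB steps */
            if (lseek(fd, CHUNK, SEEK_CUR) == -1)
                return -1;
        return lseek(fd, (long)pos.rem, SEEK_CUR) == -1 ? -1 : 0;
    }

In this form seek(1 Trillion) is just { chunks = 931, rem = 346361856 },
since 931 * 2^30 + 346,361,856 = 1,000,000,000,000.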

If you want to say "malloc(1 Trillion)", that's a slightly different problem:
here you're requesting more virtual memory than you have.  This is a language
implementors' problem: you *can* deal with a name space larger than your
physical virtual address space (more than 32 bits); it's just slower
and requires more smarts.  Object-oriented folks have a leg up here, and IBM
PC folks fought this fight some time ago, paging objects bigger than 64 KB
through segments and EMS windows.
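
The PC analogy suggests what the language implementor does under the
hood: keep the big object in backing store and fault pieces of it
through a resident window, the way EMS page frames worked.  A rough
sketch follows, with all names (huge_t, huge_read) invented and only a
single read-only one-page window; the same caveat as above applies,
namely that the library underneath must track a position that big.

    /* Hypothetical sketch: an array bigger than the address space,
     * indexed by (page, offset) -- a two-word stand-in for a wide
     * index -- and kept in a backing file with one 64 KB page
     * resident at a time.  All names are invented for illustration. */
    #include <stdio.h>

    #define PAGE 65536L

    typedef struct {
        FILE         *backing;       /* file holding the whole array  */
        unsigned long resident;      /* page now in the window; start */
                                     /* at (unsigned long)-1 so the   */
                                     /* first access faults           */
        unsigned char window[PAGE];
    } huge_t;

    /* Read byte number page*PAGE + off of the huge array. */
    static int huge_read(huge_t *h, unsigned long page, unsigned long off)
    {
        if (page != h->resident) {               /* "page fault" */
            unsigned long i;
            fseek(h->backing, 0L, SEEK_SET);
            for (i = 0; i < page; i++)           /* page-sized seeks, as above */
                fseek(h->backing, PAGE, SEEK_CUR);
            fread(h->window, 1, PAGE, h->backing);
            h->resident = page;
        }
        return h->window[off % PAGE];
    }

The "slower" is the fault check on every access; the "more smarts" is
caching more than one page and handling write-back.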

Cliff Click
-- 
Cliff Click                
cliffc@owlnet.rice.edu