[comp.arch] Solid State Secondary Storage

young@vlsi.ll.mit.edu (George Young) (01/07/89)

Our wafer scale integration group is considering developing a new kind
of computer memory unit -- something we hope might fill in the present
gap in memory speed and price between magnetic disk and ram.

Suppose you had a unit composed of a stack of silicon wafers, maybe ten or
twenty of them.

Each wafer would contain maybe 20 to 40 megabytes of word-addressable
32-bit (or 64-bit?) slow but very dense dynamic ram.

Each wafer should add only ~$350 to the manufacturing cost of the whole unit.

Sound like heaven?  Well, the catch is speed.  In order to get very high
density and cheap fabrication, various sacrifices are made resulting in 
access time of maybe ~10 microseconds.  So we are left with a box that is:

	capacity of a few hundred megabytes,
	word addressable,
	much faster access than disk,
	much slower than ram,
	and around the same price as disk.

It also should be smaller, lighter, and more rugged than disk.

The Question Is:  What's it good for?  How might it be integrated into 
existing computer (or other) systems?  What new systems or applications
would it make feasible?
What if we could put a little matching circuit on each wafer to support
content-addressable usage?  What other sort of additional (small) circuits
would be useful in such a beast?

What do people use the current (small & expensive) ram-disks for now?

Comments, raves, dreams, musings welcome by e-mail. 
Please, this is only a PROPOSED project, no real specs exist yet, so don't
ask where you can buy one :-).

George Young,  Rm. B-141		young@ll-vlsi.arpa
MIT Lincoln Laboratory			young@vlsi.ll.mit.edu
244 Wood St.				[10.1.0.10]
Lexington, Massachusetts  02173		(617) 981-2756

slackey@bbn.com (Stan Lackey) (01/07/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present
>gap in memory speed and price between magnetic disk and ram.
>Each wafer should add only ~$350 to the manufacturing cost of the whole unit.
>access time of maybe ~10 microseconds.  So we are left with a box that is:
>It also should be smaller, lighter, and more rugged than disk.
>The Question Is:  What's it good for?

My first reaction is that it would be good as a superfast paging device
for diskless workstations.  Right now, when you put a 5Meg application
on a 4Meg workstation, your paging (and the paging of the rest of the
users using the same application) can really clog the net, interfering
with all file system accesses.  A hundred-megabyte ramdisk between your 
micro and your ethernet port, if the cost is significantly less than just
RAM, seems like it could really improve perceived performance and reduce
network contention a LOT.
-Stan

dbs@think.COM (David B. Serafini) (01/07/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>
>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present
>gap in memory speed and price between magnetic disk and ram.
>
>The Question Is:  What's it good for?  How might it be integrated into 
>existing computer (or other) systems?  What new systems or applications
>would it make feasible?

Well, Cray has made a lot of money selling Solid State Disks (SSDs),
which are nothing but huge ramdisks.  They're useful for solving
really large problems that have locality of reference or block
algorithms.  Using double-buffered asynch i/o you can make the
transfer time overlap useful computation.  It can also help the
operating system keep lots of processes alive by allowing for fast
swapping.  More generally, I think this could be used to support
virtual memory quite well.  I don't know if anyone has ever tried
this on a Cray.  A segmented OS like Multics might really benefit 
from this capability.
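The double-buffered overlap described above can be sketched in modern Python; `read_block()` and `process()` are invented stand-ins for the SSD transfer and the useful computation:

```python
# A sketch of double-buffered asynchronous I/O: compute on one buffer
# while a background fetch fills the other.  read_block() and process()
# are invented stand-ins, not any real SSD interface.
import threading

def process(data):                       # stand-in for useful computation
    return sum(data)

def read_block(blocks, i):               # stand-in for an async SSD read
    return blocks[i]

def double_buffered_sum(blocks):
    total = 0
    current = read_block(blocks, 0)      # prime the first buffer
    for i in range(len(blocks)):
        result, t = {}, None
        if i + 1 < len(blocks):
            # Start fetching the next block while we compute on this one.
            t = threading.Thread(
                target=lambda: result.update(buf=read_block(blocks, i + 1)))
            t.start()
        total += process(current)        # computation overlaps the fetch
        if t:
            t.join()
            current = result['buf']
    return total
```

In hardware the fetch would be an asynchronous channel transfer rather than a thread, but the shape is the same: the transfer time hides behind the computation.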

>What if we could put a little matching circuit on each wafer to support
>content-addressable usage?  What other sort of additional (small) circuits
>would be useful in such a beast?

Virtual memory could be supported by hash tables or CAM's.  The
Multics dynamic linking used special hardware in the GE-645 to
make dereferencing the links fast.  A table of these might help.
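A CAM on the wafer would behave like a translation lookaside buffer probed on content rather than by address.  A toy software model (the names, 4 KB page size, and capacity are all assumptions; a hardware CAM probes every entry in parallel):

```python
# A toy content-addressable translation table (a TLB).  All names and
# sizes are invented for illustration.
PAGE = 4096

class TinyTLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}              # virtual page number -> frame number

    def insert(self, vpn, frame):
        if len(self.entries) >= self.capacity:
            # Crude FIFO replacement: drop the oldest entry.
            self.entries.pop(next(iter(self.entries)))
        self.entries[vpn] = frame

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE)
        frame = self.entries.get(vpn)  # the content-addressable probe
        if frame is None:
            return None                # miss: fall back to the page tables
        return frame * PAGE + offset
```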

>What do people use the current (small & expensive) ram-disks for now?

Scientific problems of all kinds need lots of memory.  I know the
computational chemists at NASA/Ames can consume over 100 MW
(800 MB) of core/SSD per job.  NEC and IBM also provide multi-GB
extended memory on their biggest machines.  Sorting very large
datasets would really win big.  Maybe putting comparison hardware
on chip would be useful.

*************************************************************
-David B. Serafini				dbs@Think.Com
Thinking Machines Corp.
Mathematical and Computational Sciences
245 First Street		"We're building a
Cambridge, MA USA 02142-1214	    machine that will
(617)876-1111 ext. 253			be proud of us."

brooks@vette.llnl.gov (Eugene Brooks) (01/08/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>
>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present
>gap in memory speed and price between magnetic disk and ram.
>
>The Question Is:  What's it good for?  How might it be integrated into 
>existing computer (or other) systems?  What new systems or applications
>would it make feasible?
Speaking as a user of both Cray and micro based machines, an SSD unit which
fills in the gap between main memory and disk performance is very useful.
On a Cray where the processor performance is so high compared to the disk
I/O rates, the SSD is REQUIRED for large programs that do not fit in main
memory (of course we would rather have more main memory).  When main memory
is made of fast static ram it can dominate the cost of the computer and you
can't afford too much of it.  We use the SSD on the Cray for problems involving
a matrix in every zone and problems involving Monte Carlo particle lists.
Normal disk I/O rates, say the ~2.5 megabytes per second that you can squeeze
out of an Eagle, are not sufficient for anything more than a VAX 11/780 class
machine on these problems.

Coding to use a RamDisk is expensive in manpower, so people only resort to it
when they can't afford more main memory.  The high cost of fast static ram makes
much larger main memories on the Cray XMP series prohibitive, hence the SSD and
its use.  For systems using cheap high density dynamic ram people just buy more
main memory and would not be willing to use a RamDisk due to the coding cost.

Whether or not RamDisks will become popular on the future RISC processors
depends on the effectiveness of cache systems.  A large cache with suitably
large cache line sizes can be thought of as the "main memory" and the ram which
you plug into the bus can be thought of as a RamDisk.  For a single cpu this
poses no real problem as making cache lines large, particularly for problems
which have the access patterns which could use a RamDisk, does not have any
serious drawbacks.  For a shared memory multiprocessor very large cache lines
can have a negative impact if processors are contending for ownership of the
cache lines.  As each processor attempts write access to a cache line it must
own it, and two processors writing every other word in the line would cause the
entire cache line to flit back and forth on the bus for each write.  In this
case a RamDisk might be competitive: you would drop the latency of main memory
and decrease the cache line size to improve performance when lines "flit"
between processors.  Explicit RamDisk operations for the "large block" transfers
might be usable.  Of course, a hierarchical coherent cache scheme which uses a
single cache for each cpu with a small line size, each hooked to a much larger
shared cache with a larger line size, finally hooked to highly interleaved but
slow dynamic ram might be a better solution (allowing you to live without the
RamDisk and the coding effort to use it).
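The line "flitting" Eugene describes can be put in rough numbers.  A toy model (worst-case strictly alternating writes; all figures invented) counts how often a line must change owners:

```python
# A back-of-the-envelope model of cache line "flitting": two cpus write
# alternating words, and any write to a line owned by the other cpu
# drags the line across the bus.
def line_transfers(n_words, words_per_line, passes):
    owner = {}                     # line number -> cpu that owns it
    transfers = 0
    for _ in range(passes):
        for word in range(n_words):
            cpu = word % 2         # even words: cpu 0, odd words: cpu 1
            line = word // words_per_line
            if owner.get(line) != cpu:
                transfers += 1     # the line must move to the writing cpu
                owner[line] = cpu
    return transfers
```

With 16 words and 4 passes, 8-word lines move 64 times while 1-word lines move only the 16 cold-start times; that worst case is where explicit "large block" RamDisk transfers could compete.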

bpendlet@esunix.UUCP (Bob Pendleton) (01/09/89)

From article <248@vlsi.ll.mit.edu>, by young@vlsi.ll.mit.edu (George Young):
> 
> Our wafer scale integration group is considering developing a new kind
> of computer memory unit -- something we hope might fill in the present
> gap in memory speed and price between magnetic disk and ram.

...

> Sound like heaven?  Well, the catch is speed.  In order to get very high
> density and cheap fabrication, various sacrifices are made resulting in 
> access time of maybe ~10 microseconds.  So we are left with a box that is:

So you do away with seeks and rotational latency, but give a transfer
rate about the same as, or maybe a bit slower than, a high performance
disk?  Very nice.
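That tradeoff is easy to put in rough numbers; the seek, rotation, and transfer figures below are assumptions, not specs for any real device:

```python
# Rough timing for a random 4 KB read: disk versus the proposed unit.
# All figures are invented for illustration.
def disk_read_ms(nbytes, seek_ms=16.0, half_rot_ms=8.3, mb_per_s=2.0):
    # average seek + half a rotation (3600 rpm) + transfer
    return seek_ms + half_rot_ms + nbytes / (mb_per_s * 1e6) * 1000

def wafer_read_ms(nbytes, access_us=10.0, mb_per_s=2.0):
    # one ~10 us access, then transfer at a disk-like rate
    return access_us / 1000 + nbytes / (mb_per_s * 1e6) * 1000
```

For a 4 KB page this gives roughly 26.3 ms for the disk and 2.1 ms for the wafer unit; the win is almost entirely in latency, since the transfer term is the same.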


> What if we could put a little matching circuit on each wafer to support
> content-addressable usage?  What other sort of additional (small) circuits
> would be useful in such a beast?

Take a look at Lee Hollaar's work on large textual database search.
Build these ram disks with a few pattern matchers per wafer, stack up
50 to 100 gigabytes worth, and you could have a killer text search
machine.
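A software sketch of the per-wafer matcher idea: each matcher scans only its own slice and the host merges the hit lists.  The partitioning is an assumption, and a real design would also need overlap handling for matches straddling a wafer boundary (this sketch misses those):

```python
# Each "wafer" scans its own slice for the pattern; the host merges
# the per-wafer hit lists into global offsets.  In hardware the scans
# would run in parallel; matches spanning slice boundaries are missed.
def wafer_search(wafers, pattern):
    hits, base = [], 0
    for slice_ in wafers:
        start = 0
        while True:
            i = slice_.find(pattern, start)
            if i < 0:
                break
            hits.append(base + i)
            start = i + 1
        base += len(slice_)
    return hits
```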

			Bob P.
-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

		Reality is what you make of it.

lamaster@ames.arc.nasa.gov (Hugh LaMaster) (01/10/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:

>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present

>The Question Is:  What's it good for?  How might it be integrated into 

Well, I think what you describe is exactly the use of the Cray SSD as a
disk buffer cache; CDC has had various versions of "ECM", some of which
had such limitations, for ~20 years or so, and many companies use RAM
in disk controllers as a cache -- so, as long as the access time is
significantly shorter than disk access time, it should work as a disk
read buffer in various configurations.

-- 
  Hugh LaMaster, m/s 233-9,  UUCP ames!lamaster
  NASA Ames Research Center  ARPA lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035     
  Phone:  (415)694-6117       

rcd@ico.ISC.COM (Dick Dunn) (01/11/89)

In article <248@vlsi.ll.mit.edu>, young@vlsi.ll.mit.edu (George Young) writes:
> Our wafer scale integration group is considering developing a new kind
> of computer memory unit -- something we hope might fill in the present
> gap in memory speed and price between magnetic disk and ram.
...
> ...So we are left with a box that is:
> 	capacity of a few hundred megabytes,
> 	word addressable,
> 	much faster access than disk,
> 	much slower than ram,
> 	and around the same price as disk.

One characteristic you didn't mention is whether it's a volatile memory.
I assume you're not planning a battery backup, so it must be volatile. 
That makes a major difference in what it can be used for.

>...The Question Is:  What's it good for?...

Some time ago (late '70's), Storage Technology (then sTc, now StorageTek)
made an animal called a Solid State Disk.  The SSD was built in an era when
good fast RAM was a lot more expensive, but they were able to use
relatively less expensive RAM - a lot of it - and build something suitable
for paging space on IBM mainframes.  It was built to mimic the interface of
some particular small/fast disk (or drum? I forget).  One key point is that
if you use it for paging space it doesn't matter that it's a volatile
memory system.

Another old old idea to look at is the Extended Core Storage on the CDC
6x00s.  ECS was slow core memory (cycle time more than 3x main memory) but
it made up for this by a very wide data path (480 bits) to get an effective
transfer rate of 600 Mbit/sec.
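Dick's ECS figures are self-consistent: 600 Mbit/s across a 480-bit path works out to one full-width transfer every 0.8 microseconds:

```python
# Checking the ECS arithmetic above.
path_bits = 480
rate_bits_per_s = 600e6
transfers_per_s = rate_bits_per_s / path_bits   # 1.25 million per second
cycle_us = 1e6 / transfers_per_s                # 0.8 us between transfers
```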

If you can pull tricks to get the data paths wide enough that you get a
very high transfer rate, it could be an interesting product.  However, disk
arrays may offer you some competition in the higher range of storage
capacity, and memory sizes are creeping up from below...I wonder just how
the disk<->memory gap you're aiming at will look a year or two from now.
(That's not necessarily to say that it's closing, but it is certainly
moving.)
-- 
Dick Dunn      UUCP: {ncar,nbires}!ico!rcd           (303)449-2870
   ...Worst-case analysis must never begin with "No one would ever want..."

kolding@june.cs.washington.edu (Eric Koldinger) (01/11/89)

In article <13487@ico.ISC.COM> rcd@ico.ISC.COM (Dick Dunn) writes:
>Some time ago (late '70's), Storage Technology (then sTc, now StorageTek)
>made an animal called a Solid State Disk.  The SSD was built in an era when
>good fast RAM was a lot more expensive, but they were able to use
>relatively less expensive RAM - a lot of it - and build something suitable
>for paging space on IBM mainframes.  It was built to mimic the interface of
>some particular small/fast disk (or drum? I forget).  One key point is that
>if you use it for paging space it doesn't matter that it's a volatile
>memory system.

Several companies still make similar beasts.  At a company I used to work for
we had a copy of the main index for our database, plus some of the most
accessed files, stored on one.  It sped up database accesses a lot.  Plus, since
the main index wasn't changed too often, it didn't hurt to keep it in volatile
memory, with a second copy out on disk someplace.

-- 
	_   /|				Eric Koldinger
	\`o_O'				University of Washington
  	  ( )     "Gag Ack Barf"	Department of Computer Science
       	   U				kolding@cs.washington.edu

darin@nova.laic.uucp (Darin Johnson) (01/11/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>So we are left with a box that is:
>
>	capacity of a few hundred megabytes,
>	word addressable,
>	much faster access than disk,
>	much slower than ram,
>	and around the same price as disk.
>
>It also should be smaller, lighter, and more rugged than disk.
>The Question Is:  What's it good for?  How might it be integrated into 
>existing computer (or other) systems?  What new systems or applications
>would it make feasible?

It would make a very nice paging device.  Since decent paging devices
are relatively expensive (we're talking fast disks, not SCSI or
ST225's), this would be a nice alternative.  Think about something like
common LISP on a personal computer.  Currently, most do not have paging,
and if they did have paging it would be to a slow disk.  A device like
you described would vastly improve the performance.  Also, it would
go over very nicely with diskless workstations (even some disk-full
ones, since we're always short on disk space).

Darin Johnson (leadsv!laic!darin@pyramid.pyramid.com)
	"You can't fight in here! This is the war room.."

rk@lexicon.UUCP (Bob Kukura) (01/12/89)

In article <13487@ico.ISC.COM> rcd@ico.ISC.COM (Dick Dunn) writes:

   In article <248@vlsi.ll.mit.edu>, young@vlsi.ll.mit.edu (George Young) writes:
   > Our wafer scale integration group is considering developing a new kind
   > of computer memory unit -- something we hope might fill in the present
   > gap in memory speed and price between magnetic disk and ram.
   ...
   > ...So we are left with a box that is:
   > 	capacity of a few hundred megabytes,
   > 	word addressable,
   > 	much faster access than disk,
   > 	much slower than ram,
   > 	and around the same price as disk.

   [stuff deleted]

   If you can pull tricks to get the data paths wide enough that you get a
   very high transfer rate, it could be an interesting product.  However, disk
   arrays may offer you some competition in the higher range of storage
   capacity, and memory sizes are creeping up from below...I wonder just how
   the disk<->memory gap you're aiming at will look a year or two from now.
   (That's not necessarily to say that it's closing, but it is certainly
   moving.)


Most of this discussion has proposed using this device as either a
fast swap device or as a fast replacement for (or cache for) disk
storage.  I would like to see this discussion explore other possible
applications and alternative memory hierarchies.

Since the access time for this device is much faster than the seek
time of a disk, and its transfer rate, storage capacity, and price are
similar to those of a disk, the need for RAM in certain memory
hierarchies might be eliminated completely.

One such case is what I think George Young was getting at - if the
data path between a processor cache and this storage device was wide
enough, this device might replace the RAM and the paging disk in
virtual memory systems.  This would provide consistent access times to
the entire virtual address space and would eliminate thrashing when a
middle layer in the memory hierarchy becomes full.  With this kind of
hierarchy, it might make sense for the processor to maintain two
internal contexts to switch between when a cache fault occurs.
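The two-context idea can be modeled roughly.  With invented figures (run_us of useful work between cache faults, fill_us to service one from the wafer store):

```python
# Utilization with multiple hardware contexts: while one context waits
# out a fill from the wafer store, the cpu runs another.  All figures
# are invented for illustration.
def utilization(run_us, fill_us, contexts):
    busy = contexts * run_us       # useful work available per cycle
    period = run_us + fill_us      # one context's compute-then-stall cycle
    return min(1.0, busy / period)
```

With 5 us of work per 10 us fill, one context keeps the cpu a third busy, two contexts two-thirds, and three saturate it, which is why even a second context is worth having.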

Another case where the RAM in the memory hierarchy might be eliminated
is in real-time systems, such as the digital audio editing systems
that we make at Lexicon.  In order to record, edit, and play back
audio from disk storage devices, we need lots of RAM to buffer data
during the seek times of the disks.  The solid state storage device,
with its fast seek time, might be able to replace both the RAM and the
disk, since the data in RAM is usually accessed only once.
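The buffering arithmetic makes Bob's point directly; the audio rate and access figures are assumptions (16-bit stereo at 48 kHz is 192,000 bytes/s):

```python
# RAM buffer needed to cover the storage device's worst-case access
# time at the audio data rate.  All figures are invented.
def buffer_bytes(rate_bytes_per_s, seek_s):
    return rate_bytes_per_s * seek_s

disk_buf  = buffer_bytes(192_000, 0.030)   # ~30 ms worst-case disk seek
wafer_buf = buffer_bytes(192_000, 10e-6)   # ~10 us wafer access
```

That is about 5.6 KB of buffer per audio stream against a disk seek, versus a couple of bytes against the wafer store's access time.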

-- 
-Bob Kukura		uucp: {husc6,linus,harvard,bbn}!spdcc!lexicon!rk
			phone: (617) 891-6790

zaphod@madnix.UUCP (Ron Bean) (01/12/89)

   Wasn't Clive Sinclair working on something like this a few
years ago? It was to be a wafer-scale RAM device, with serial
communications paths between blocks of RAM. You'd map out the bad
parts and store the results in an EPROM, which could support
several wafers. There was an article in BYTE about it a couple of
years ago. I thought it would be great for virtual memory. Anyone
know if he's still working on it?

mcwill@inmos.co.uk (Iain McWilliams) (01/12/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>
>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present
>gap in memory speed and price between magnetic disk and ram.
>
>Suppose you had a unit composed of a stack of silicon wafers, maybe ten or
>twenty of them.

[Munch Munch Munch ... Brpppp !!!] 

>Comments, raves, dreams, musings welcome by e-mail. 
>Please, this is only a PROPOSED project, no real specs exist yet, so don't
>ask where you can buy one :-).
>
>George Young,  Rm. B-141		young@ll-vlsi.arpa
>MIT Lincoln Laboratory			young@vlsi.ll.mit.edu
>244 Wood St.				[10.1.0.10]
>Lexington, Massachusetts  02173		(617) 981-2756

I remember reading about wafer scale integration as much as four or five
years ago. Then, the main problems to be solved were coping with flaws
on the wafer.  This is because four flaws on a wafer of 'normal' RAM
chips might only decrease your yield from 100 chips to 96.  (Not that much of
a problem.)

However, if you have flaws on a WSI device there are two kinds of circuits
that could be affected.

i)	A RAM chip/module. 
	This wouldn't cause that much grief as your control circuitry
should be able to mark that module as bad and still function normally.
Something similar to a disk o/s marking blocks as bad in the FAT.

ii)	The control circuitry.
	This is the biggie: if the control/data paths are hit, then
unless you have backup control circuits you could lose a large
percentage of the storage capacity on the wafer.  Again it can be
compared with a disk getting a bad block within the FAT.
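Case (i) is cheap to handle with a remap table built at test time, much like a FAT's bad-block marks.  A software sketch (the module granularity and names are assumptions):

```python
# Remap table hiding bad RAM modules: logical module i lives in
# physical module remap[i].  Built once from wafer test results.
def build_remap(n_modules, bad):
    return [m for m in range(n_modules) if m not in bad]

def physical_module(remap, logical):
    return remap[logical]   # IndexError past the derated capacity
```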

As I said I remember reading about these problems four or five years
ago. So I don't even know if these are still regarded as the main
difficulties in building WSI Memory Modules. However, if they are it
would be interesting to hear how you plan on overcoming them.

-- 
Iain McWilliams     Inmos Ltd, 1000 Aztec West, Almondsbury, Bristol, BS12 4SQ
------------------------------------------------------------------------------
The opinions above are my personal views and do  | 
         not reflect Inmos policy.               |    mcwill@inmos.co.uk

karl@ficc.uu.net (karl lehenbauer) (01/13/89)

In article <408@laic.UUCP>, darin@nova.laic.uucp (Darin Johnson) writes:
> It would make a very nice paging device.  Since decent paging devices
> are relatively expensive (we're talking fast disks, not SCSI or
> ST225's), this would be a nice alternative.  Think about something like
> common LISP on a personal computer.  Currently, most do not have paging,
> and if they did have paging it would be to a slow disk.  A device like
> you described would vastly improve the performance.

Unless you already had as much directly addressed memory as your bus could
support, it would always be a win under a VM system to add RAM as
bus memory rather than as a fast disk for paging use, because you'll still
take page faults to get data from your RAM disk but you won't for the same
data in directly addressed RAM.  
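Karl's argument is about effective access time.  A sketch with invented figures (one fast RAM cycle versus a fault-handling overhead plus the ~10 us device access):

```python
# Effective access time when some references hit directly addressed RAM
# and the rest take a page fault to the ramdisk.  All figures invented.
def effective_us(hit_ratio, ram_us=0.1, fault_overhead_us=100.0,
                 device_us=10.0):
    miss = 1.0 - hit_ratio
    return hit_ratio * ram_us + miss * (fault_overhead_us + device_us)
```

Even at a 99% hit ratio the average access is around 1.2 us, an order of magnitude over pure RAM, which is why directly addressed memory wins until the bus is full.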

Also, support for mapped files is starting to show up (Mach, etc), the
result being that your files logically appear in your address space and
data is loaded from page faults.  This makes having lots of directly
addressed RAM all the more desirable.

RAMdisks have themselves been around for a long time.  I had one on my Apple
II.  Better, I think, is to use the RAM (directly addressed or bank switched)
as a cache.  That way you don't have to make the decisions of what to copy
into RAM disk nor do you have to remember to copy it out after it's been
updated.  DEC used to sell a PDP-11 RAM disk a long time ago called EM-11,
"non-rotating mass storage."
-- 
-- uunet!ficc!karl	"The greatest dangers to liberty lurk in insidious
-- karl@ficc.uu.net	encroachment by men of zeal, well-meaning but without 
			understanding." -- Justice Louis O. Brandeis

andrew@frip.gwd.tek.com (Andrew Klossner) (01/13/89)

> ...So we are left with a box that is:
> 	capacity of a few hundred megabytes,
> 	word addressable,
> 	much faster access than disk,
> 	much slower than ram,
> 	and around the same price as disk.

The twenty-plus responses I've seen discuss this as though it were a
typical RAMdisk.  It seems to me that they all miss the point, which is
that the proposed device would cost about the same (per byte) as a
disk.  If this puppy were realized, we'd buy them by the thousands,
hook up battery backup, and use them as workstation main storage in
place of our conventional disks.

  -=- Andrew Klossner   (uunet!tektronix!hammer!frip!andrew)    [UUCP]
                        (andrew%frip.gwd.tek.com@relay.cs.net)  [ARPA]

mat@uts.amdahl.com (Mike Taylor) (01/13/89)

In article <379@madnix.UUCP>, zaphod@madnix.UUCP (Ron Bean) writes:
> 
>    Wasn't Clive Sinclair working on something like this a few
> years ago? It was to be a wafer-scale RAM device, with serial
> communications paths between blocks of RAM. You'd map out the bad
> parts and store the results in an EPROM, which could support
> several wafers. There was an article in BYTE about it a couple of
> years ago. I thought it would be great for virtual memory. Anyone
> know if he's still working on it?

Not personally, but the company he started to do this is still in
business. It's called Anamartic.
-- 
Mike Taylor                               ...!{hplabs,amdcad,sun}!amdahl!mat

[ This may not reflect my opinion, let alone anyone else's.  ]

khb%chiba@Sun.COM (Keith Bierman - Sun Tactical Engineering) (01/14/89)

In article <2710@ficc.uu.net> karl@ficc.uu.net (karl lehenbauer) writes:
>In article <408@laic.UUCP>, darin@nova.laic.uucp (Darin Johnson) writes:
>> It would make a very nice paging device.  Since decent paging devices
>> are relatively expensive (we're talking fast disks, not SCSI or
>> ST225's), this would be a nice alternative.  Think about something like
>> common LISP on a personal computer.  Currently, most do not have paging,
>> and if they did have paging it would be to a slow disk.  A device like
>> you described would vastly improve the performance.
>
>Unless you already had as much directly addressed memory as your bus could
>support, it would always be a win under a VM system to add RAM as
>bus memory rather than as a fast disk for paging use, because you'll still
>take page faults to get data from your RAM disk but you won't for the same
>data in directly addressed RAM.  

The question is cost.  The original posting made it clear that this new
RAM would be slower, but much cheaper.  Without major modifications to
the VM algorithms, putting one to three orders of magnitude slower memory
on the system would produce a major slowdown.

>
>Also, support for mapped files is starting to show up (Mach, etc), the
>result being that your files logically appear in your address space and
>data is loaded from page faults.  This makes having lots of directly
>addressed RAM all the more desirable.

Cost and size are still constraints.  Very high performance systems
have many levels of memory (multi-level cache, high speed ram, slower
ram like the item under discussion, disks of various speeds, tapes,
etc.).  Look at very large Amdahl, IBM, Cray, or Fujitsu machines to
see how this works. 

For very low performance CPU's (vis a vis the memory system) one uses
one set of design considerations.  For very fast CPU's (say, for
example, a 4 nsec GaAs RISC) multiple levels of memory become key to
performance design.
Keith H. Bierman
It's Not My Fault ---- I Voted for Bill & Opus

daveb@geaclib.UUCP (David Collier-Brown) (01/14/89)

From article <248@vlsi.ll.mit.edu>, by young@vlsi.ll.mit.edu (George Young):
> What if we could put a little matching circuit on each wafer to support
> content-addressable usage?  What other sort of additional (small) circuits
> would be useful in such a beast?

Bob Pendleton in  <1178@esunix.UUCP>:
 Take a look at Lee Hollaar's work on large textual database search.
 Build these ram disks with a few pattern matchers per wafer, stack up
 50 to 100 gigabytes worth, and you could have a killer text search
 machine.

 Or:
    collect various suggested add-ons,
    find the points of commonality,
    design "hooks" so that one can add simple logic on an adjacent
chip (say, gate-arrays) at the expense of making that logic
critically (the usual term is "fatally") intertwined with the memory...
    You then have a kit for anyone who wants to design a fast-search
machine.  You might want to release one or two examples of useful
configurations: I'd really like a search chip, but I suspect that
about the time Little-Tiny-Chip-Foundry Inc. comes out with one my cpu
will search almost as fast...

--dave (with apologies to Henry Spencer) c-b
-- 
 David Collier-Brown.  | yunexus!lethe!dave
 Interleaf Canada Inc. |
 1550 Enterprise Rd.   | He's so smart he's dumb.
 Mississauga, Ontario  |       --Joyce C-B

daveb@geaclib.UUCP (David Collier-Brown) (01/20/89)

In article <248@vlsi.ll.mit.edu> young@vlsi.ll.mit.edu (George Young) writes:
>Our wafer scale integration group is considering developing a new kind
>of computer memory unit -- something we hope might fill in the present
>gap in memory speed and price between magnetic disk and ram.

 Oops.  I missed the obvious: a device between magnetic disk and
ram. (Mea culpa, mea culpa, mea maxima culpa).

  Seriously, though, how about a kit for disk-manufacturers to add
to their disk-resident electronics to cache significant amounts of
data (tracks, say) to be read from or to be written to the disk. If
it looked rather like a disk insofar as its addressing was
concerned, one could make the task of arranging the memory<->disk
transfers easy at the hardware level.
  And the performance improvement might allow us to make
write-through instead of sync-eventually the Unix default, which
would make people with performance-oriented applications very happy.
(Geac is/was a performance-oriented company, even when I was there
working on Eunuchs). 
  Of course, you also need some short-term power backup, but that's
not tooooo hard.
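The kit could look like a write-through track cache in front of the drive.  A software sketch with invented names (the "disk" here is just a list of blocks):

```python
# Track cache for disk-resident electronics: reads stage whole tracks
# into the wafer store; writes go straight through to the disk, which
# is what makes dropping sync-eventually thinkable.  Names invented.
class TrackCache:
    def __init__(self, disk, track_size):
        self.disk = disk               # the "disk": a list of blocks
        self.track_size = track_size
        self.cache = {}                # track number -> list of blocks

    def read(self, block):
        track, off = divmod(block, self.track_size)
        if track not in self.cache:    # miss: stage the whole track
            lo = track * self.track_size
            self.cache[track] = self.disk[lo:lo + self.track_size]
        return self.cache[track][off]

    def write(self, block, value):
        self.disk[block] = value       # write-through: disk is always right
        track, off = divmod(block, self.track_size)
        if track in self.cache:
            self.cache[track][off] = value
```

Write-through keeps the disk consistent at every instant, so a power failure loses nothing the OS believed was on disk; that is the property the short-term power backup only has to protect during a transfer in flight.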

--dave

-- 
 David Collier-Brown.  | yunexus!lethe!dave
 Interleaf Canada Inc. |
 1550 Enterprise Rd.   | He's so smart he's dumb.
 Mississauga, Ontario  |       --Joyce C-B