[comp.arch] disk striping

eugene@pioneer.arpa (Eugene Miya N.) (08/04/87)

This is a follow-up (I got lots of letters, so I hope interest can be
stirred and more work done in this area (striping)).  I have changed
the order of Chuck's questions to take the simpler one first.

>OK.  I'll bite.  And what classes are defined and what do they mean?

Supercomputers purchased by the U.S. Govt. were (are) rated for their
performance.  The rating is informal and unofficial (emphasis), done
for procurement purposes.  The work is done by the Dept. of Energy
(prior to that ERDA, and prior to that the AEC).  The rating is arbitrary
and does not involve any official measurement tool.  What I say is my
understanding of how the rating works.

The rating was developed by Sid Fernbach and George Michael, who
were at Lawrence Livermore Lab (before it became LLNL).  I have
seen charts on the wall at LLNL which detail some of this.
Supercomputers come in 6 "classes."  Each class should be a factor of 4
to 16 more "powerful" than the preceding class, depending on who you talk
to.  Classes are defined more by the existing general-purpose machines which
sit in a class.  A Class 6 machine is something of the power of a Cray 1
or Cyber 205, or any of the Japanese machines.  Class 5 computers
included the ILLIAC IV and CDC 7600; Class 4: CDC 6600, IBM 370/195.
There are discrepancies: the ILLIAC had an I/O bandwidth higher than
the Cray could ever have in the near-term future.

Classes came about for the same reasons Berkeley Unix went from 1.0, 2.0,
3.0, to 4.0 BSD: lawyers.  Whenever new agreements or rules had to be
written, a new class or distribution had to be negotiated.  (E.g., what
would an agreement for a 5.0BSD look like?...shudder!)  Now, a frequent
question: where is `my' machine (typically an Apple, VAX or SUN)?  These
machines don't rate.  The definition of a supercomputer is relative, so
at any given time, those given machines don't rate, and a class is
closed.  Sid said a VAX could be a "Class 1/2."

Problems with classes: the most obvious problem is handling parallelism.
The MPP and the Connection Machine are good cases which don't fit this
rating scheme.  This includes problems like I/O.  The new database
machines should also make some of this rating interesting.

Personal note: when I first saw classes I was reminded of ratings for climbs
(rock, etc.), early versions of which ran from 1 to 6 (or I to VI)
[why not 1 to 5 or 1 to 10?].  This got me curious and I eventually
met George and Sid.  Also, I know that lots of DOE/ex-AEC people are or
were climbers, like E. Teller.  What is interesting is that climbing is
going through a similar problem with its closed-ended rating system
(breaking into 7s).  Sorry for the digression.  This is the second time
I have described classes to the arch group.

>What is "disk stripping"?
>-- Chuck

Oh, you caught my typo!  Disk stripping is the process of cleaning the
surface of a platter before the magnetic material is deposited ;-).

I meant to say DISK STRIPING.  This is the distribution of data across
multiple "spindles" in order to 1) increase total bandwidth, 2) gain
fault tolerance (as on Tandems), and 3) other miscellaneous
reasons.
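For concreteness, here is a minimal sketch (my own illustration, not any
vendor's scheme) of sense 1: consecutive logical blocks are dealt
round-robin across the spindles, so a large sequential transfer engages
all disks at once.

```python
# Toy round-robin striping map (illustrative; real striping units are
# usually multi-sector "stripes," not single blocks).
def stripe(block, ndisks):
    """Map a logical block number to (disk, block-within-that-disk)."""
    return block % ndisks, block // ndisks

def destripe(disk, offset, ndisks):
    """Inverse map, back to the logical block number."""
    return offset * ndisks + disk
```

A read of logical blocks 0..7 over 4 disks then touches each spindle
exactly twice, which is where the bandwidth multiplication comes from.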

Very little work has been done on the subject, yet a fair number of
companies have implemented it: Cray, CDC/ETA, the Japanese
manufacturers, Convex, and Pyramid (so I am informed), and I think
Tandem, etc.  Now for important perspective: it seems that striping over
3-4 disks, as in a personal computer, is a marginal proposition.
Striping over 40 disks, now there is some use.  The break-even point is
probably between 8-16 disks (excepting the fault-tolerance case).  A
person I know at Amdahl boiled the problem down to 3600 RPM running on a 60
Hz wall clock: mechanical bottlenecks of getting data into and out of a
CPU from a disk.  The work is not as glamorous as making CPUs, yet it is just
as difficult (consider the possibility of losing just one spindle).
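The Amdahl point can be put in rough numbers.  A back-of-the-envelope
model (the seek time and per-spindle rate below are my own assumptions,
not figures from the posting): every spindle still pays the full
mechanical latency, so striping only pays off when the transfer itself,
not seek-plus-rotation, dominates.

```python
# Rough striping model.  3600 RPM on the 60 Hz "wall clock" gives one
# revolution per 1/60 s; seek time and transfer rate are assumed.
ROTATION_MS = 1000.0 / (3600 / 60.0)   # 16.67 ms per revolution
LATENCY_MS  = ROTATION_MS / 2          # average rotational latency
SEEK_MS     = 20.0                     # assumed average seek
RATE_MB_MS  = 0.002                    # assumed 2 MB/s per spindle

def transfer_ms(size_mb, ndisks):
    # each spindle moves size/ndisks, but still pays seek + latency
    return SEEK_MS + LATENCY_MS + (size_mb / ndisks) / RATE_MB_MS

def speedup(size_mb, ndisks):
    return transfer_ms(size_mb, 1) / transfer_ms(size_mb, ndisks)
```

For a 100 MB transfer the speedup over 8 disks is nearly 8; for a 10 KB
transfer it is barely above 1, which is why striping a handful of
small-machine disks is a marginal proposition.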

The two most cited papers I have seen are:

%A Kenneth Salem
%A Hector Garcia-Molina
%T Disk Striping
%R TR 332
%I EE CS, Princeton University
%C Princeton, NJ
%D December 1984

%A Miron Livny
%A Setrag Khoshafian
%A Haran Boral
%T Multi-Disk Management Algorithms
%R DB-146-85
%I MCC
%C Austin, TX
%D 1985

Both of these are pretty good reports, but more work needs to be done in
this area; hopefully, one or two readers might take it up seriously.  The issue is
not simply one of sequentially writing bits out to sequentially lined-up
disks.  I just received:

%A Michelle Y. Kim
%A Asser N. Tantawi
%T Asynchronous Disk Interleaving
%R RC 12496 (#56190)
%I IBM TJ Watson Research Center
%C Yorktown Heights, NY
%D Feb. 1987

This looks good, but what is interesting is that it does not cite either
of the two above reports, but quite a few others (RP^3- and Ultracomputer-
based).

Kim's PhD dissertation is on synchronous disk interleaving and she has a
paper in IEEE TOC.

Another paper I have is Arvin Park's paper on IOStone, an I/O benchmark.
Park is also at Princeton under Garcia-Molina (massive memory VAXen).
I have other papers, but these are the major ones; just start
thinking terabytes and terabytes.  From a badge I got at ACM/SIGGRAPH:

	Disk Space: The Final Frontier

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

ram@shukra.UUCP (08/04/87)

In article <2432@ames.arpa>, eugene@pioneer.arpa (Eugene Miya N.) writes:
> >OK.  I'll bite.  And what classes are defined and what do they mean?
> 
> The rating was developed by Sid Fernbach and George Michael, who
> were at Lawrence Livermore Lab (before it became LLNL).  I have
> seen charts on the wall at LLNL which detail some of this.
> Supercomputers come in 6 "classes."  Each class should be a factor of 4

    Seems like this grouping/classifying is rather rigid.  The performance
    groups are like sliding windows, which keep changing regularly.  Instead
    of inventing new classes (for better-performing machines),
    why not define them as follows.  Determine the fastest machine (don't ask me
    how, and assume we don't quarrel over numbers) - from that,
    all machines that reach within 80% of its performance, say, will be
    the supercomputer class (otherwise we will run out of superlatives very
    soon).  The mini-super or super-mini (I don't understand which goes
    ahead of which) could take the next 20% of the performance slot, and
    ultimately the eternal PC takes the last 1-2%.  So, in the year 2000,
    hopefully the CRAY-1 should be in the PC slot and the CRAY-2S in the
    workstation slot.  (Room heaters will become obsolete :-))  I guess this
    scheme has to be militaristically enforced on the marketing guys :-).
    
> 
> I meant to say DISK STRIPING.  This is the distribution of data across
> multiple "spindles" in order to 1) increase total bandwidth, 2) for
> reasons of fault-tolerance (like Tandems), 3) other miscellaneous
> reasons.
  
    So, my guess was right.  Does the CM use such a feature to obtain
    a large BW?  How come IBM's disk farms don't have these, or do they?
> 
> --eugene miya
>   NASA Ames Research Center

---------------------
   Renu Raman				ARPA:ram@sun.com
   Sun Microsystems			UUCP:{ucbvax,seismo,hplabs}!sun!ram
   M/S 5-40, 2500 Garcia Avenue,
   Mt. View,  CA 94043

haynes@ucscc.UCSC.EDU.ucsc.edu (99700000) (08/04/87)

Isn't this what IBM uses in their Airline Control Program?  There was that
article in CACM a while back about the TWA reservation system, and it said
something about spreading files over a large number of spindles for greater
throughput.

haynes@ucscc.ucsc.edu
haynes@ucscc.bitnet
..ucbvax!ucscc!haynes

devine@vianet.UUCP (Bob Devine) (08/05/87)

In article <2432@ames.arpa>, eugene@pioneer.arpa (Eugene Miya N.) writes:
> Very little work has been done on the subject yet a fair number of
> companies have implemented it: Cray, CDC/ETA, the Japanese
> manufacturers, Convex, and Pyramid (so I am informed), and I think
> Tandem, etc.  Now for important perspective: It seems that striping over
> 3-4 disks like in a personal computer is a marginal proposition.
> Striping over 40 disks, now there is some use.  The break even-point is
> probably between 8-16 disks (excepting the fault tolerance case).

  Hillis' Connection Machines supposedly[*] have an implementation of
disk striping where there are 39 disks that each get one bit from
a word.  The word is formed by 32 data bits and 7 ECC bits.  Doing it
in this fashion allows complete recovery even if one disk gets hosed.
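A sketch of how a 32+7 bit-per-spindle layout can survive a dead disk.
This is my own illustration using a standard Hamming SECDED code, not
Thinking Machines' actual implementation: each of the 39 code bits
lives on its own disk, and a single bad bit position (one lost spindle)
is corrected from the syndrome.

```python
# Illustrative Hamming SECDED(39,32): 32 data bits, 6 Hamming parity
# bits, and 1 overall parity bit -- one bit per disk.  (Assumed
# scheme; the CM's real layout is not described in this thread.)
PARITY = (1, 2, 4, 8, 16, 32)          # parity bits sit at powers of two

def encode(data):                      # data: list of 32 bits (0/1)
    code = [0] * 39                    # positions 1..38 used; 0 unused
    it = iter(data)
    for pos in range(1, 39):
        if pos not in PARITY:
            code[pos] = next(it)       # lay data into non-parity slots
    for p in PARITY:                   # parity p covers positions q with q & p
        code[p] = sum(code[q] for q in range(1, 39)
                      if q & p and q != p) % 2
    overall = sum(code[1:]) % 2        # 39th bit: overall even parity
    return code[1:] + [overall]        # 39 bits, one per spindle

def decode(word):                      # word: 39 bits, at most 1 bit wrong
    code = [0] + list(word[:38])
    syndrome = 0
    for p in PARITY:                   # syndrome spells out the bad position
        if sum(code[q] for q in range(1, 39) if q & p) % 2:
            syndrome |= p
    if syndrome:
        code[syndrome] ^= 1            # flip the bit from the dead disk
    return [code[q] for q in range(1, 39) if q not in PARITY]
```

Because a failed spindle is a *known* bad position, even simpler
erasure codes would do; SECDED is just the classical 39-for-32 choice.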

Bob Devine

[*] I write 'supposedly' because I only saw a blurb about it.  I don't
    know of dates, availability or ordering info.

bejc@pyrltd.UUCP (Brian Clark) (08/05/87)

In article <2432@ames.arpa>, eugene@pioneer.arpa (Eugene Miya N.) writes:
  
> Very little work has been done on the subject yet a fair number of
> companies have implemented it: Cray, CDC/ETA, the Japanese
> manufacturers, Convex, and Pyramid (so I am informed), and I think
> Tandem, etc.  Now for important perspective: It seems that striping over
> 3-4 disks like in a personal computer is a marginal proposition.
> Striping over 40 disks, now there is some use.  The break even-point is
> probably between 8-16 disks (excepting the fault tolerance case).  A
> person I know at Amdahl boiled the problem down to 3600 RPM running on 60
> HZ wall clock: mechanical bottlenecks of getting data into and out of a
> CPU from a disk.  The work is not glamourous as making CPUs, yet is just
> as difficult (consider the possibility of losing just one spindle).

Pyramid has been shipping "striped disks" as part of OSx 4.0 since early this
year.  "Striped disk" is one of 4 "virtual disk" types offered under OSx, the
others being "Concatenated," "Mirrored," and "Simple."  A full description of the
techniques and their implementation was given by Tom Van Baak of Pyramid
Technology Corporation at the February Usenix/Uniforum meeting in Washington.

The principal reason for using "striped disk" is performance.  The ability to
place interleaved clusters of data on different spindles can be a winner in
cases where the disk throughput rate is approaching the saturation point of a
single disk, and you have a disk controller intelligent enough to know where
every disk head is at any given time.  To take a case in point, ICC, a company
based in the City of London, supplies financial data from an 8 Gbyte database to
dial-up subscribers.  One index in the database is >800 Mbyte long and had been
set up on a "concatenated virtual disk" made up of two 415 Mbyte Eagles.  When
the setup was switched to the "striped virtual disk" model, a throughput
increase of 34% was measured.  This doesn't mean that "striped" disks are going
to answer everybody's disk performance problems, but they can provide
significant improvements in certain cases.

Both Tom and I have produced papers on virtual disks and would be happy to
answer any further questions that you have.  Tom can be contacted at:
			
			pyramid!tvb

while my address is given below.



-- 
      -m-------  Brian E.J. Clark	Phone : +44 276 63474
    ---mmm-----  Pyramid Technology Ltd Fax   : +44 276 685189
  -----mmmmm---                         Telex : 859056 PYRUK G
-------mmmmmmm-  			UUCP  : <england>!pyrltd!bejc

lfernand@cc4.bbn.com.BBN.COM (Louis F. Fernandez) (08/07/87)

> 
> Isn't this what IBM uses in their Airline Control Program?  There was that
> article in CACM a while back about the TWA reservation system, and it said
> something about spreading files over a large number of spindles for greater
> throughput.
> 
> haynes@ucscc.ucsc.edu

What ACP does isn't what we are calling disk striping in this newsgroup.
ACP has an option to write each record to two different disks at the same
time.  This doesn't increase throughput but does have several benefits:

    1/ It creates a backup copy so the data will not be lost if one disk
       crashes.
    2/ It allows ACP a choice of where to read the data from.  ACP will
       read the data from the disk with the shortest queue, reducing the
       access delay.

BTW, airline reservation systems have quite high performance.  Large ones
(e.g., United and American) average over 1000 transactions per second,
where a transaction is a few characters of keyboard input, a half-dozen
disk accesses, and a few hundred characters of CRT output.

...Lou

lfernandez@bbn.com
...!decwrl!bbncc4!lfernandez

rpw3@amdcad.AMD.COM (Rob Warnock) (08/09/87)

In article <310@cc4.bbn.com.BBN.COM> lfernand@cc4.bbn.com.BBN.COM (Louis F. Fernandez) writes:

+---------------
| +---------------
| | Isn't this what IBM uses in their Airline Control Program?  There was that
| | article in CACM a while back about the TWA reservation system, and it said
| | something about spreading files over a large number of spindles for greater
| | throughput.  | haynes@ucscc.ucsc.edu
| +---------------
| What ACP does isn't what we are calling disk striping in this newsgroup.
| ACP has an option to write each record to two different disks at the same
| time.  This doesn't increase throughput but does have several benefits...
+---------------

Sorry, the original poster is correct (sort of). ACP *does* have disk striping,
in addition to the redundant writing you mentioned, but still it isn't quite
the same as we are talking about here. They spread a file across several disks,
all right, but the allocation of records (all fixed length -- this is *old*
database technology!) is such that the disk drive number is a direct-mapped
hash of some key in the record! What this does is spread accesses to similar
records (like adjacent seats on the same flight) across many disks (sometimes
up to 100 packs!!!).

Kind of special purpose, really...
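As described, the placement is just a direct-mapped hash from record key
to pack number.  A hypothetical sketch (the key fields, the hash, and the
pack count here are all my inventions, not ACP's):

```python
# Toy ACP-style record placement: the drive number is a direct-mapped
# hash of the record key, so records for adjacent seats scatter across
# many packs instead of piling up on one arm.
def pack_for(flight, seat, npacks):
    key = 0
    for ch in f"{flight}:{seat}":      # any cheap deterministic hash works
        key = (key * 31 + ord(ch)) % (1 << 32)
    return key % npacks
```

With, say, 100 packs, seats 12A and 12B on the same flight land on
different spindles, so a burst of bookings for one flight does not queue
behind a single disk.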

+---------------
| BTW, airline reservations systems have quite high performance.  Large ones
| (e.g., United and American) average over 1000 transactions per second,
| where a transaction is a few characters of keyboard input, a half-dozen
| disk access, and a few hundred characters of CRT output.  
| ...Lou | lfernandez@bbn.com | ...!decwrl!bbncc4!lfernandez
+---------------

True. In fact, they are reaching the limits of that technology. The United
Airlines system is approaching 2000 (!) transactions per second (as you
defined them -- correctly) during the peak busy hour.  Yikes!

But note that perfect consistency is *not* a constraint of airline rez
systems (as we all know ;-} ). They don't even have completely serialized
transactions or even real record-locking. It is considered "acceptable"
(for the sake of throughput) to drop or garble an occasional passenger
record once in a while... as long as it is really once in a long while.

On the other hand, crew flight records *must* be correct (because of FAA
legal limits on flight crew work hours and recording thereof), so the crew
scheduling sub-system *does* have "correct" behavior... but it pays a severe
penalty in performance.


Rob Warnock
Systems Architecture Consultant

UUCP:	  {amdcad,fortune,sun,attmail}!redwood!rpw3
ATTmail:  !rpw3
DDD:	  (415)572-2607
USPS:	  627 26th Ave, San Mateo, CA  94403

rchrd@well.UUCP (Richard Friedman) (08/10/87)

One more comment about disk striping:  there is a real limiting
factor to disk technology, and it is not the speed of light (the
limiting factor to CPU technology) but rather the speed of sound.
If you try to make a disk go too fast in an attempt to improve
transfer rates, you approach Mach 1 in the turbulent flow around
the surface of the disk, and the resulting shock wave destroys
the disk, literally.   By spreading the data across many platters,
you achieve a kind of parallelism in I/O.  I have written some
software packages using asynch I/O on the Cray that attempt this
sort of thing and it is very successful for large blocks of data.
But there is always a trade-off.  Imagine a situation where a
transfer of a single record of a few million 8-byte words is
broken down into a simultaneous transfer of a number of
partitions of this record; each partition can be transferred
asynchronously so that it runs in parallel on its own disk.
Now you have improved the transfer rate, but you have also
increased the overall I/O "interference" and it may slow down
due to increased system activity!   There's no free lunch,
it seems.
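The partitioned transfer described above can be mocked up in a few
lines.  A toy sketch (mine, not the Cray package): the record is split
into ndisks partitions and each is "written" concurrently, one worker
per spindle.

```python
# Toy asynchronous partitioned transfer: one thread per "spindle".
import threading

def parallel_transfer(record, ndisks):
    chunk = (len(record) + ndisks - 1) // ndisks
    parts = [record[i * chunk:(i + 1) * chunk] for i in range(ndisks)]
    out = [None] * ndisks

    def xfer(i):                 # stand-in for an async write to disk i
        out[i] = parts[i]

    workers = [threading.Thread(target=xfer, args=(i,))
               for i in range(ndisks)]
    for w in workers:
        w.start()
    for w in workers:            # record isn't done until the slowest
        w.join()                 # partition finishes -- the trade-off above
    return b"".join(out)
```

The join at the end is exactly where the "interference" bites: the
whole record waits on the slowest partition.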
  
Richard Friedman    Pacific-Sierra Rsearch (Berkeley)
-- 
...Richard Friedman [rchrd]                         
uucp:  {ucbvax,lll-lcc,ptsfa,hplabs}!well!rchrd
- or -   rchrd@well.uucp

roy@phri.UUCP (Roy Smith) (08/11/87)

In article <3721@well.UUCP> rchrd@well.UUCP (Richard Friedman) writes:
> there is a real limiting factor to disk technology, and it is not the
> speed of light (the limiting factor to CPU technology) but rather the
> speed of sound.  If you try to make a disk go too fast in an attempt to
> improve transfer rates, you approach Mach 1 in the turbulent flow around
> the surface of the disk, and the resulting shock wave destroys the disk,
> literally.

	So, why not put the disk in a vacuum?  We've got ultracentrifuges
all over the place here that run as fast as 100,000 RPM.  Of course, the
rotor chamber is kept under hard vacuum with a diffusion pump, and the
walls of the chamber have refrigeration coils to carry away the heat caused
by the few air molecules remaining, but why couldn't you do that to a disk
drive if you wanted to?

	Ultracentrifuges run at about 50 microns of vacuum; for a 36,000
RPM disk you wouldn't need anywhere near that; a few PSI would probably be
fine.  We've got 20,000 RPM centrifuges that run in atmosphere, but they
don't have delicate read/write heads and oxide surfaces to worry about, and
the condensation on the outside of the rotor from the refrigeration doesn't
bother them either.

	I understand that in big power plants, the insides of the
generators are filled with hydrogen instead of air, because the speed of
sound is faster, so the rotor tips don't go supersonic.  Apparently, as long
as you keep oxygen away, there is no danger of explosion.  You could have
hydrogen-filled disk drives, I suppose.  Sure would make head crashes more
exciting! :-)
-- 
Roy Smith, {allegra,cmcl2,philabs}!phri!roy
System Administrator, Public Health Research Institute
455 First Avenue, New York, NY 10016

jmg@dolphy.UUCP (Jeffrey Greenberg) (08/12/87)

> In article <3721@well.UUCP> rchrd@well.UUCP (Richard Friedman) writes:
> > If you try to make a disk go too fast in an attempt to
> > improve transfer rates, you approach Mach 1 in the turbulent flow around
> > the surface of the disk, and the resulting shock wave destroys the disk,
> > literally.
> 
> 	So, why not put the disk in a vacuum?

You'll lose the Winchester effect, where the head is supported by the
laminar flow, floating close enough to get high recording density.

Why can't the head be placed very close to the disk in a vacuum?  Is
it a difficulty of producing a smooth disk surface at a reasonable cost?

msf@amelia (Michael S. Fischbein) (08/13/87)

In article <155@dolphy.UUCP> jmg@dolphy.UUCP (Jeffrey Greenberg) writes:
>> In article <3721@well.UUCP> rchrd@well.UUCP (Richard Friedman) writes:
>> > improve transfer rates, you approach Mach 1 in the turbulent flow around
                                                        ^^^^^^^^^^^^^^
I wondered about this.

>> > the surface of the disk, and the resulting shock wave destroys the disk,
>> > literally.
>> 
>> 	So, why not put the disk in a vacuum?
>
>You'll lose the Winchester effect where the head is supported by the 
>laminar flow floating close enough to get high recording density.
 ^^^^^^^^^^^^

OK, disk drive manufacturing gurus, which is it?
		mike

Michael Fischbein                 msf@prandtl.nas.nasa.gov
                                  ...!seismo!decuac!csmunix!icase!msf
These are my opinions and not necessarily official views of any
organization.

kent@xanth.UUCP (Kent Paul Dolan) (08/13/87)

In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>In article <3721@well.UUCP> rchrd@well.UUCP (Richard Friedman) writes:
>> there is a real limiting factor to disk technology, and it is not the
>> speed of light (the limiting factor to CPU technology) but rather the
>> speed of sound.  If you try to make a disk go too fast in an attempt to
>> improve transfer rates, you approach Mach 1 in the turbulent flow around
>> the surface of the disk, and the resulting shock wave destroys the disk,
>> literally.
>
>	So, why not put the disk in a vacuum?  We've got ultracentrifuges[...]

>	Ultracentrafuges run at about 50 microns of vacuum; for a 36,000
>RPM disk you wouldn't need anywhere near that; a few PSI would probably be
>fine.[...]
>Roy Smith, {allegra,cmcl2,philabs}!phri!roy

Hmmm.  From the little I remember of disk technology, back when the
hardware side of things was still interesting to me, disk heads depend
on the air cushion for spacing from the platter surface.  The heads
are described as "flying" on a surface effect air cushion.  Without
that aerodynamic effect, head positioning would require much more
accurate mechanisms, and disk platters would have much smaller
manufacturing tolerances and more stringent materials requirements to
avoid original or age related "waves" in the surface.

Still, there ought to be some gain from drawing a partial vacuum, in
that the energy transfer from heads to surface would be less if less
air were being plowed out of the way above Mach 1, so that at some
level of vacuum, perhaps the remaining (supersonic) air would fly the
heads, but not destroy them or the platters.  Of course, there is
still the problem of (standing ?) waves between the platters and the
case...

Usual answer - more research required.  ;-)



Kent, the man from xanth.

cja@umich.UUCP (08/13/87)

In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>	I understand that in big power plants, the insides of the
>generators are filled with hydrogen instead of air because the speed of
>sound is faster so the rotor tips don't go supersonic.  Apparently, as long
>as you keep oxygen away, there is no danger of explosion.

Yeh, I'll bet the designers of the Hindenburg had the same attitude.

Charles J. Antonelli           Phone:     313-936-9362
44 Adv Tech Laboratories Bldg  Internet:  cja@crim.eecs.umich.edu
The University of Michigan     Uucp:      {umix,mibte}!umich!eecs.umich.edu!cja
Ann Arbor, MI   48109-2210

weaver@prls.UUCP (Michael Gordon Weaver) (08/13/87)

About vacuum disk drives (warning: long winded).

As someone who has worked in a disk design lab (albeit as an aide),
I had to reply about the idea of vacuum disk drives. 

Magnetic disks have had flying heads since day one (c. 1954).  For the
first few years, the heads had air bearings; that is, air was forced down
through holes in the head to keep the head off the disk platter.  Since the
sixties (not sure about the date), heads have flown on the air carried
along with the platter.

Although the platter surface is quite smooth, it is not very flat from the 
head's point of view: the height difference as the head moves around one 
rotation is much, much greater than the gap between the head and the  
platter. The small gap is required for high bit densities. 

The name Winchester comes from the IBM 3340 DASD (disk), code-named
Winchester because its planned "30-30" configuration sounded like the
rifle manufactured by the Winchester company.  This drive was (I believe) the
first to use 'self-loading' heads, which sat on the disk surface when
at rest.  Previous disks required that the heads be unloaded from the
surface whenever the disk was not up to its full speed, thus requiring
a more complicated carriage for the head assembly (comb).  The name
Winchester was later applied to all disks with self-loading heads,
and even later to all hard disks used in microcomputers.

Disk drives in a vacuum would require very flat disks, and some way of 
keeping the heads at a precise distance from the platter. Also, the 
head-disk assembly would have to be gas tight and strong enough to 
withstand full atmospheric pressure. Finally, the type of bearings 
used for the spindle would probably leak lubricant into the vacuum,
so they would have to be changed.

Disk drive transfer rates are what they are for a number of reasons,
but one of the biggest is standardization. Most drive interfaces will
only work with one transfer rate, so disks are engineered with one
transfer rate in mind. Generally, total storage size is the most 
important characteristic for most customers, with head seek time 
being second, and transfer rate being third. If higher transfer 
rates were required by a large part of the market, the rotation
speed of the drives could be increased from the current typical
3600 rpm to say 10,000 rpm. This would be expensive, but much
cheaper than using a vacuum.

Computers below about 1 MIPS do not have sufficient memory bandwidth
to deal with transfer rates above the 8 or 16 megabits per second rates
most popular today.  Higher transfer rates require otherwise unnecessary costs
for low-performance machines, and so the market for higher transfer rates
is limited.

Disk controllers for the current transfer rates can be made of TTL and
bit-slice parts (both work up to about 20 megabits per second).  Going to a
higher rate will increase the cost of these.

Supercomputer manufacturers and users cannot afford to develop their own
disks, since they do not have the volume of sales to spread the development
costs over.  So they use stock drives, even though they could support higher
transfer rates.



-- 
Michael Gordon Weaver                   Usenet: ...pyramid!prls!weaver
Signetics Microprocessor Division
811 East Arques Avenue
Sunnyvale, California USA 94088-3409            Phone: (408) 991-3450

jlw@mtuxo.UUCP (J.WOOD) (08/14/87)

Disks cannot be put in a vacuum using current thinking, since
the heads literally fly along the surface of the disk.  This
requires air molecules, since the head shape is like an inverted
airfoil.  The head tries to fly into the disk.  Then, when it gets
microscopically close, ground effects take over and keep the head
from actually crashing into the disk surface (most of the time :-) ).
If this technology were to be abandoned in favor of, say, physical
rigidity holding the heads at the proper distance, then the tolerances
would mean that the gap would be much larger, leading to a severe
loss of disk capacity, since this closeness between head and
surface is what allows us to write such a thin strip of data
on the disk without interfering with adjacent tracks.  Wider tracks
would also increase track-to-track access time.  Unless a substitute
for air could be found, like the suggested hydrogen, or some
other mechanism like superconductor repulsion could get the
heads close to the surface without actually crashing, this looks
like a losing proposition.

Joe Wood
lznv!jlw

henry@utzoo.UUCP (Henry Spencer) (08/14/87)

>	So, why not put the disk in a vacuum?

Because when the book says that the heads "fly" above the disk surface,
it is being literal and not metaphorical.  The separation between heads
and surface is maintained aerodynamically, not mechanically.  So there
has to be some sort of gas around.

> 	I understand that in big power plants, the insides of the
> generators are filled with hydrogen instead of air because the speed of
> sound is faster so the rotor tips don't go supersonic...

This would probably work, perhaps with some design changes in the head
aerodynamics, but the explosion hazard would be much more serious in a
mass-produced quasi-consumer product that has to run safely (if not
necessarily reliably :-)) in the field for years without maintenance.
Big power plants can afford significant continuous effort to keep such
problems under control.
-- 
Support sustained spaceflight: fight |  Henry Spencer @ U of Toronto Zoology
the soi-disant "Planetary Society"!  | {allegra,ihnp4,decvax,utai}!utzoo!henry

dwc@homxc.UUCP (D.CHEN) (08/14/87)

In article <3721@well.UUCP>, rchrd@well.UUCP writes:
> 
> One more comment about disk striping:  there is a real limiting
> factor to disk technology, and it is not the speed of light (the
> limiting factor to CPU technology) but rather the speed of sound.
> If you try to make a disk go too fast in an attempt to improve
> transfer rates, you approach Mach 1 in the turbulent flow around
> the surface of the disk, and the resulting shock wave destroys
> the disk, literally.   By spreading the data across many platters,

can't you seal the disk in a vacuum?  isn't that what happens in
winchesters?  i don't know, just asking.

> you achieve a kind of parallelism in I/O.  I have written some
> software packages using asynch I/O on the Cray that attempt this
> sort of thing and it is very successful for large blocks of data.
> But there is always a trade-off.  Imagine a situation where a
> transfer of a single record of a few million 8-byte words is
> broken down into a simultaneous transfer of a number of
> partitions of this record; each partition can be transferred
> asynchronously so that it runs in parallel on its own disk.
> Now you have improved the transfer rate, but you have also
> increased the overall I/O "interference" and it may slow down
> due to increased system activity!   There's no free lunch,
> it seems.

i was doing a quick analysis of disk striping and made a small
observation:  with multiple disks, the average rotational latency
approaches that of an entire rotation instead of 1/2 or 1/3 of a rotation
(i forget the numbers).
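The observation checks out if the requests start together and the
transfer must wait for every spindle: the wait is the maximum of n
independent uniform latencies, whose mean is n/(n+1) of a rotation,
versus 1/2 for a single disk.  A quick simulation (my own, with an
assumed unit rotation time):

```python
# Monte Carlo check: mean rotational wait, in rotations, when a
# striped request must wait for the slowest of `ndisks` spindles.
import random

def mean_latency(ndisks, trials=100_000):
    random.seed(1)                    # deterministic for repeatability
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(ndisks))
    return total / trials             # expected value: n / (n + 1)
```

So at 8 disks the average wait is already about 8/9 of a rotation, not
half a rotation, which eats into the bandwidth win for small transfers.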

danny chen
homxc!dwc

dhesi@bsu-cs.UUCP (08/15/87)

In article <505@mtuxo.UUCP> jlw@mtuxo.UUCP (J.WOOD) writes:
[about the issue of running disk drives at much higher speeds]

>Disks cannot be put in a vacuum using current thinking since
>the heads literally fly along the surface of the disk.
>. . . .The head tries to fly into the disk.  Then when it gets
>microscopically close ground effects take over and keep the head
>from actually crashing into the disk surface (most of the time :-) ).

[and operating a disk drive in a vacuum would prevent the use of
airflow between the head and the disk surface from maintaining a
tight tolerance.]

I think the main limiting factor keeping disk drive speeds low is
the tremendous centrifugal/centripetal force that acts at high speeds.
I've used ultracentrifuges that run at 60,000 rpm and above, and the
rotors have to be *very* carefully machined.  Even small scratches can
lead to spots of strain that will eventually cause the rotor to break
up.  I've also seen what the inside of an ultracentrifuge looks like
after a rotor breaks up -- you don't want to be nearby when that
happens, just in case the half-inch steel casing doesn't hold up
(though it's designed to).

Ultracentrifuge rotors are kept small.  All the ones I've seen that
were used at high speed were about 8 to 10 inches in diameter.  I
suspect anything much larger would be awfully difficult to spin fast
without breaking up.

Even if you could make the disk platter strong enough, I wonder what it
would do to the magnetic coating to be subjected to a lot of Gs.  Might
it not want to flow a little towards the edge of the platter?

Finally, some speculation.  We know that "information", which we
normally think of being a purely abstract quantity, is actually closely
related to entropy and therefore to heat, which we can actually feel.
I wonder if "information" has mass too?  Just a little?  Enough that
bits of information would tend to move towards the outer tracks?  This
may sound like a crackpot idea, but don't forget, there was a time when
they would have called you crazy if you had said that light had mass.
-- 
Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

chuck@amdahl.UUCP (08/15/87)

>>	So, why not put the disk in a vacuum?
>
>heads "fly" above the disk surface,

Well, with all this discussion about mach 1 being the limiting factor
for conventional disk drives, one has to ask, "But what about optical
disk drives?"  Surely an optical disk drive could be designed where
the mirror scans across sectors and tracks at faster than mach 1?

>Support sustained spaceflight: fight |  Henry Spencer @ U of Toronto Zoology
>the soi-disant "Planetary Society"!  | {allegra,ihnp4,decvax,utai}!utzoo!henry

Yeah, what he says.  Let's develop    |  Chuck Simmons @ Amdahl
the moon before we terraform Mars.    |  amdahl!chuck

eugene@pioneer.arpa (Eugene Miya N.) (08/19/87)

Well, I'm glad that all of you Good Bodies find Disk Striping
interesting.  Now, if one or two of you could just go off and build
something, we would be very interested in buying something.  I think
we could easily get you a few dozen customers.  Would be nice if it
interfaces with Crays, transfers in excess of 1 GB/sec, etc. etc......
Oh yeah, should not cost more than a Cray itself and hold, say,
1 Terabyte of data (100 GWords? yeah, okay for now 8-).  Am I
forgetting anything?

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

greg@xios.XIOS.UUCP (Greg Franks) (08/19/87)

In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes about
operating disks in a vacuum to get around supersonic effects and the
nasty consequences thereof.  I always thought that disk heads "flew"
over the media.  Removing the air would make vertical head positioning
somewhat challenging.  I also believe that one wants to get the heads as
close to the media as possible to obtain the best recording density. 

#	I understand that in big power plants, the insides of the
#generators are filled with hydrogen instead of air because the speed of
#sound is faster so the rotor tips don't go supersonic.  

I think power generators use hydrogen for cooling purposes more than
anything else.  I could be wrong - it's been a while since I've been
around 500 Megawatt generators.  (They turn at 3600 RPM, so if someone
wants to get inspired and calculate the speed of the rotor tips, feel
free!)
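Taking up that invitation: a sketch of the tip-speed arithmetic (the 1 m rotor diameter is an assumed round number, not a figure from the post):

```python
import math

SPEED_OF_SOUND_AIR = 343.0   # m/s at room temperature

def tip_speed(rpm, diameter_m):
    """Peripheral speed of a rotor tip in m/s."""
    return rpm / 60.0 * math.pi * diameter_m

v = tip_speed(3600, 1.0)     # 3600 RPM, assumed 1 m rotor diameter
print(f"{v:.0f} m/s, Mach {v / SPEED_OF_SOUND_AIR:.2f}")
```

At that assumed diameter the tips stay well subsonic in air; the margin shrinks roughly linearly as the rotor diameter grows.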
-- 
Greg Franks             XIOS Systems Corporation, 1600 Carling Avenue,
(613) 725-5411          Ottawa, Ontario, Canada, K1Z 8R8
seismo!mnetor!dciem!nrcaer!xios!greg        "Vermont ain't flat!"

fpst@hubcap.UUCP (Dennis Stevenson) (08/19/87)

in article <2533@ames.arpa>, eugene@pioneer.arpa (Eugene Miya N.) says:
> 
> Well, I'm glad that all of you Good Bodies find Disk Striping
> interesting.

Me, too.  And while we're at it, I'd like to see some performance
modeling of the beasties.  Checking with the performance types, it
appears that there is precious little in that regard.
-- 
Steve Stevenson                            fpst@hubcap.clemson.edu
(aka D. E. Stevenson),                     fpst@clemson.csnet
Department of Computer Science,            comp.hypercube
Clemson University, Clemson, SC 29634-1906 (803)656-5880.mabell

pml@casetek.casetek.UUCP (Pat Lashley) (08/21/87)

In article <164@umich.UUCP> cja@crim.eecs.umich.edu.UUCP (Charles J. Antonelli) writes:
>In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>>	I understand that in big power plants, the insides of the
>>generators are filled with hydrogen instead of air because the speed of
>>sound is faster so the rotor tips don't go supersonic.  Apparently, as long
>>as you keep oxygen away, there is no danger of explosion.
>
>Yeh, I'll bet the designers of the Hindenburg had the same attitude.
>

Actually, the designers of the Hindenburg wanted to use Helium, but
the US government would not allow sales of US Helium to Germany.  At
that time recent discoveries of helium under Texas gave the US the
largest known supply of helium in the world.  (It might still be, I
left my Britannica at home today... :-)


-- 
Internet:	casetek!patl@sun.com		PM Lashley
uucp:		...sun!casetek!patl		CASE Technology, Inc.
arpa:		casetek@crvax.sri.com		Mountain View, CA 94087
>> Anyone can have the facts; having an opinion is an art. <<

grr@cbmvax.UUCP (George Robbins) (08/21/87)

In article <164@umich.UUCP> cja@crim.eecs.umich.edu.UUCP (Charles J. Antonelli) writes:
> In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
> >	I understand that in big power plants, the insides of the
> >generators are filled with hydrogen instead of air because the speed of
> >sound is faster so the rotor tips don't go supersonic.  Apparently, as long
> >as you keep oxygen away, there is no danger of explosion.
> 
> Yeh, I'll bet the designers of the Hindenburg had the same attitude.

Uh, I think this has more to do with the properties of Hydrogen as a coolant
gas than anything to do with the speed of sound.  I think the idea is
to blow the hydrogen through a hollow rotor, where there are some engineering/
efficiency gains to be had from keeping things compact.  Where are all those
power engineers when you need them? 8-)
-- 
George Robbins - now working for,	uucp: {ihnp4|seismo|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@seismo.css.GOV
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

grr@cbmvax.UUCP (George Robbins) (08/21/87)

In article <3721@well.UUCP> rchrd@well.UUCP (Richard Friedman) writes:
> 
> One more comment about disk striping:  there is a real limiting
> factor to disk technology, and it is not the speed of light (the
> limiting factor to CPU technology) but rather the speed of sound.
> If you try to make a disk go too fast in an attempt to improve
> transfer rates, you approach Mach 1 in the turbulent flow around
> the surface of the disk, and the resulting shock wave destroys
> the disk, literally.

Yeah, but my calculations indicated that a 14" drive has a peripheral
velocity of "only" 150 miles/hour.  If this turbulence/mach 1 thing
were currently a limiting factor, we could have some of these new 8"
drives spinning 8 times as fast, no?
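Both halves of that argument check out numerically; a quick sketch (14" and 8" are taken to be platter diameters):

```python
import math

def rim_speed_mph(rpm, diameter_in):
    """Peripheral velocity of a platter rim in miles per hour."""
    inches_per_hour = rpm * 60.0 * math.pi * diameter_in
    return inches_per_hour / (12 * 5280)   # 12 in/ft, 5280 ft/mile

print(f'14-inch platter at 3600 rpm:     {rim_speed_mph(3600, 14):.0f} mph')
print(f'8-inch platter at 8x (28800 rpm): {rim_speed_mph(28_800, 8):.0f} mph')
# the first comes out near 150 mph; even the hypothetical 8x-faster
# 8-inch drive stays below the speed of sound in air (~767 mph)
```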

There may well be some head/media engineering problems associated with
going faster, but the limiting factor has traditionally been in the
head magnetics and drive electronics coupled with limited i/o bandwidth
on the systems that constitute the volume uses of the drives.

Now, as others have pointed out, parallel transfer drives are available
from Fujitsu and now CDC.  They aren't cheap, because the volume is much
lower and you need n times the drive electronics in the read/write path:
normal drives usually have only a "preamp" associated with each head, with
common amplification, shaping, and data-separation circuitry after it.  Also
I imagine you need some bit-deskew logic to ensure that the right bits
really come out in parallel.

Now I'm sure Fujitsu could have made the drive spin some percentage faster,
but in doing so, they would have lost much of the parts interchangeability
with their volume drives, which would only make things even more expensive.
-- 
George Robbins - now working for,	uucp: {ihnp4|seismo|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr@seismo.css.GOV
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

ww0n+@andrew.cmu.edu (Walter Lloyd Wimer, III) (08/21/87)

> From: cja@umich.UUCP (Charles J. Antonelli)
> Subject: Re: Disk Striping (description and references) plus class brief
> 
> In article <2838@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
> >       I understand that in big power plants, the insides of the
> >generators are filled with hydrogen instead of air because the speed of
> >sound is faster so the rotor tips don't go supersonic.  Apparantly, as long
> >as you keep oxygen away, there is no danger of explosion.
> 
> Yeh, I'll bet the designers of the Hindenburg had the same attitude.
> 
> Charles J. Antonelli           Phone:     313-936-9362
> 44 Adv Tech Laboratories Bldg  Internet:  cja@crim.eecs.umich.edu
> The University of Michigan     Uucp:      {umix,mibte}!umich!eecs.umich.edu!cja
> Ann Arbor, MI   48109-2210
> 


Then why not use helium instead?  Its mass is about four times that of
hydrogen, and I don't know the speed of sound in each gas offhand, but it
might just work. . . .


Walt Wimer
Carnegie Mellon University

Internet:  ww0n+@andrew.cmu.edu
Bitnet:    ww0n+%andrew.cmu.edu@cmuccvma
UUCP:      ...!{seismo, ucbvax, harvard}!andrew.cmu.edu!ww0n+

johnw@astroatc.UUCP (John F. Wardale) (08/21/87)

In article ??? (Walter Lloyd Wimer, III) writes:
>Then why not use helium instead?  It's mass is about four times that of
>hydrogen, and I don't know the speed of sound in each gas offhand, but it
>might just work. . . .
1:  Hydrogen (H2) has mass 2.  Helium [it's NOT He2] has mass 4.
2:  I think the formula uses the square root of the gas's mass,
	so H2 is only ~40% better than He, which is ~2.7 times better than air.
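The square-root rule is right as far as it goes; the full ideal-gas expression also carries the heat-capacity ratio gamma. A sketch (the gamma values and 20 C temperature are textbook assumptions):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def speed_of_sound(gamma, molar_mass_kg, temp_k=293.0):
    """Ideal-gas speed of sound: sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

for name, gamma, m in [("H2", 1.41, 0.00202), ("He", 1.66, 0.00400),
                       ("air", 1.40, 0.02897)]:
    print(f"{name}: {speed_of_sound(gamma, m):.0f} m/s")
# H2 lands near 1300 m/s, He near 1000 m/s, air near 343 m/s, so
# hydrogen buys roughly 30% over helium once gamma is included.
```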
---------------------------------------------

		Advancing Disk Technology

I fear that disk makers cannot afford to design an exotic 8" He
disk that spins at .8 Mach.  (3600 rpm = .2 Mach at 14"; so 3600
* 3 [size] * 5 [Mach 1] * 2 [for He] * .8 [tolerances and slop]
gives a speed of 86,400 rpm, or 1440 rps.)
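The chain of factors in that estimate multiplies out as claimed:

```python
base_rpm = 3600                      # 14" drive, ~0.2 Mach at the rim
rpm = base_rpm * 3 * 5 * 2 * 0.8     # size, Mach-1 headroom, He, tolerances/slop
print(rpm, rpm / 60)                 # 86400.0 rpm, 1440.0 rps
```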

Figure a 3-year design cycle and 5-year life.  Said unit will have
to compete with the DRAM memories of 5 to 8 years from now!
How much would said disk be worth in an age where 16M-bit
chips are old-hat and people have 64M-bit chips working "in the
lab"  ???   (that's only 128 chips per GIGA-byte!!!!)

When did someone first say "The days of rotating magnetic memory
are about to end" ??  5, 10, 15 years ago ???

Does anyone else agree that this *WILL* happen in the next 5 or so
years???

-- 
					John Wardale
... {seismo | harvard | ihnp4} ! {uwvax | cs.wisc.edu} ! astroatc!johnw

To err is human, to really foul up world news requires the net!

ken@argus.UUCP (Kenneth Ng) (08/22/87)

In article <2239@cbmvax.UUCP>, grr@cbmvax.UUCP (George Robbins) writes:
[edited discussion on hydrogen in power plants for speed purposes]
: Uh, I think this has more to do with the properties of Hydrogen as a coolent
: gas rather than anything to do with the speed of sound.  I think the idea is
: to blow the hydrogen through a hollow rotor where there are some engineering/
: efficiency gains to be had from keeping things compact.

Possibly correct; doesn't the IBM TCM (Thermal Conduction Module) use
helium to better conduct heat away from the chips?


Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet(preferred) ken@orion.bitnet

chuck@amdahl.amdahl.com (Charles Simmons) (08/23/87)

In article <414@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>Figure a 3year design cycle and 5 year life.  Said unit will have
>to compete with DRAM memorys of 5 to 8 years from now!
>How much would said disk be worth in an age where 16M-bit
>chips are old-hat and people have 64M-bit chips working "in the
>lab"  ???   (that's only 128 chips per GIGA-byte!!!!)
>
>When did someone first say "The days of rotating magnetic memory
>are about to end" ??  5, 10, 15 years ago ???
>
>Does anyone else agree that this *WILL* happen in the next 5 or so
>year???
>-- 
>					John Wardale
>... {seismo | harvard | ihnp4} ! {uwvax | cs.wisc.edu} ! astroatc!johnw

Correct me if I'm wrong, but...
I was under the impression that not only did silicon memory double
in capacity every two years, but magnetic memory also doubled in
capacity every two years as well.  This would make it difficult for
silicon memory to catch up to magnetic memory.  Unless, of course,
you're really saying there is an upper bound on the amount of memory
one can use...

Chuck
amdahl!chuck

phil@amdcad.AMD.COM (Phil Ngai) (08/23/87)

In article <414@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>Figure a 3year design cycle and 5 year life.  Said unit will have
>to compete with DRAM memorys of 5 to 8 years from now!
>How much would said disk be worth in an age where 16M-bit
>chips are old-hat and people have 64M-bit chips working

Come on John, don't be stupid. Just because a bathtub and a toilet
both hold water doesn't mean you use them for the same things.  You
must have temporarily forgotten the difference between volatile and
nonvolatile storage. 

-- 
I speak for myself, not the company.

Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or amdcad!phil@decwrl.dec.com

johnw@astroatc.UUCP (John F. Wardale) (08/24/87)

In article <18018@amdcad.AMD.COM> phil@amdcad.UUCP (Phil Ngai) writes:
>In article <414@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>>  that modern disk designs must compete with RAMs and/or RAM-disks
>
>Come on John, don't be stupid. <flaming comments deleted>
>You must have temporarily forgotten the difference between volatile
>and nonvolatile storage. 

Sorry Phil (and others via e-mail), but I *DO* know that RAMs are
volatile.

I didn't say that disks will die tomorrow, but in 2-4 years I
assume RAM disks (with *GOOD* power backup systems) will easily be
trustable for days!  (Anyone who doesn't back up his/her disks (RAM
or magnetic) to tape (or preferably optical write-once disks) is
really asking for trouble!!!)

I think that in 3-5 years the common trend will be optical (WO)
disks for source, binaries, and backups, while all of /tmp and
most development will use RAM disks.  Likely the RAM-disks will be
on semi-private workstations and the optical-WO's on some kind of LAN.

-- 
					John Wardale
... {seismo | harvard | ihnp4} ! {uwvax | cs.wisc.edu} ! astroatc!johnw

To err is human, to really foul up world news requires the net!

johnw@astroatc.UUCP (John F. Wardale) (08/25/87)

In article <12718@amdahl.amdahl.com> chuck@amdahl.amdahl.com (Charles Simmons) writes:
>In article <414@astroatc.UUCP> I wote:

>> RAM vs disk stuff

>Correct me if I'm wrong, but...
>I was under the impression that not only did silicon memory double
>in capacity every two years, but magnetic memory also doubled in
>capacity every two years as well.

I'm not sure about disk capacity .... anyone else care to comment?

Current disk speeds are very close (factor of <=5 ??) to the speeds of
10 or 20 year old disks.

My comments were in regard to disk *SPEED*  (striping, parallel
head disks....)

As RAMs get larger, it gets more practical to build reasonable
sized RAM-disks (they may require more volume, and have higher $$/Mbyte
(if your claim is true), but could be a great deal if you need XX
Mbytes at a certain (high) speed!!)

As I look into my crystal ball, I see RAM and optical (WO) disks
replacing magnetic memory within 5 years...(say, ~~~  30% of total, 
and 90% of new systems/designs)

-- 
					John Wardale
... {seismo | harvard | ihnp4} ! {uwvax | cs.wisc.edu} ! astroatc!johnw

To err is human, to really foul up world news requires the net!

jthomp@convexs.UUCP (08/26/87)

/* Written  3:10 pm  Aug 14, 1987 by dwc@homxc.UUCP in convexs:comp.arch */
/* ---------- "Re: Disk Striping (description and" ---------- */
In article <3721@well.UUCP>, rchrd@well.UUCP writes:
> 
> One more comment about disk striping:  there is a real limiting

can't you seal the disk in a vacuum?  isn't that what happens in
winchesters?  i don't know, just asking.

nope, no air, no head gap ---> no disk....

> you achieve a kind of parallelism in I/O.  I have written some
> software packages using asynch I/O on the Cray that attempt this
> sort of thing and it is very successful for large blocks of data.

i was doing a quick analysis of disk striping and made a small
observation:  with multiple disks, the average rotational latency
approaches that of an entire rotation instead of 1/2 or 1/3 rotation
(i forget the numbers).
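That observation is easy to confirm by simulation: a striped transfer must wait for the slowest of n independently-phased spindles, and the expected maximum of n uniform latencies is n/(n+1) of a rotation. A Monte Carlo sketch:

```python
import random

def avg_striped_latency(n_disks, trials=100_000):
    """Mean rotational wait, in rotations, when all n spindles must line up."""
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(n_disks))
    return total / trials

for n in (1, 2, 4, 8):
    print(n, round(avg_striped_latency(n), 3))
# tends toward a full rotation: roughly 0.5, 0.667, 0.8, 0.889
```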

Not if you configure things correctly!
someone at your site should use 'tunefs' to put the correct numbers
for the 'rotdelay' parameter...

(in a super-secret project here, we're making dump go fast, fast, fast.)
(I can't give numbers now...  but it's GREAT!)

Jim Thompson (jthomp@convex)

jthomp@convexs.UUCP (08/26/87)

/* Written  9:54 pm  Aug 18, 1987 by eugene@pioneer.UUCP in convexs:comp.arch */
Well, I'm glad that all of you Good Bodies find Disk Striping
interesting.  Now, if one or two of you could just go off and build
something, we would be very interested in buying something.  I think
we could easily get you a few dozen customers.  Would be nice if it
interfaces with Crays, transfers in excess of 1 GB/sec, etc. etc......
Oh yeah, should not cost more than a Cray itself and hold, say,
1 Terabyte of data (100 GWords? yeah, okay for now 8-).  Am I
forgetting anything?


Well, here at Convex, we have, guess what!  Disk Striping!
(Hey!)

Current 'stripe' limit is 2Gig, but you can have lots of them.
(sign extension prevents a bigger size.)

And, guess what, it costs less than a Cray.

This should not be confused with an advertisement; rather, it's a note
that someone has done it.

chuck@amdahl.amdahl.com (Charles Simmons) (08/26/87)

In article <420@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>In article <12718@amdahl.amdahl.com> chuck@amdahl.amdahl.com (Charles Simmons) writes:
>>I was under the impression that not only did silicon memory double
>>in capacity every two years, but magnetic memory also doubled in
>>capacity every two years as well.
>
>My comments were in regard to disk *SPEED*  (striping, parallel
>head disks....)

Doesn't increased capacity tend to imply faster disk speeds?  Faster
seek times in relation to the amount of data?  Faster bit transfer
times since the speed of the disk stays the same, but the number of
bits encountered by the head during a rotation increases?
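The transfer-rate half of that is simple geometry: at a fixed spindle speed the raw media rate is just bits per track divided by rotation time, so doubling linear density doubles the rate. A sketch with made-up track sizes:

```python
def media_rate(bits_per_track, rpm):
    """Raw bit rate under the head at a fixed spindle speed (bits/s)."""
    return bits_per_track * rpm / 60.0

old = media_rate(100_000, 3600)   # hypothetical older drive
new = media_rate(200_000, 3600)   # same rpm, doubled linear density
print(old, new, new / old)        # rate scales directly with bits per track
```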

>As RAMs get larger, it gets more practical to build reasonable
>sized RAM-disks (they may require more volume, and have higher $$/Mbyte 
>(if your claim is true), but could be a great deal if you need XX
>Mbytes at a certain (high) speed!!  

Yep...  I can't argue with this seeing as how the company I work for
sells a 1Gbyte ram disk.

>As I look into my crystal ball, I see RAM and optical (WO) disks
>replacing magnetic memory within 5 years...(say, ~~~  30% of total, 
>and 90% of new systems/designs)
>
>					John Wardale
>... {seismo | harvard | ihnp4} ! {uwvax | cs.wisc.edu} ! astroatc!johnw

Now I could see optical (read/write) disks taking over a substantial
portion of the market.  They don't seem to be intrinsically much more
expensive than magnetic disks.  However, it's not clear that ram will
become as inexpensive as magnetic disks.  If ram is two orders of
magnitude faster and more expensive than magnetic (or optical) disks,
from a price/performance point of view, there is an optimal amount of
ram to buy to cache accesses to disk.  A ram cache nearly a thousand
times smaller than the disk farm can provide performance close to 90%
of having a ram disk farm, but the cost would be far less.
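How close a small cache comes to an all-RAM farm depends entirely on the hit rate, which a one-line model makes concrete (the 100 us RAM and 25 ms disk access times are illustrative assumptions, not measurements):

```python
def effective_access_ms(hit_rate, t_ram_ms=0.1, t_disk_ms=25.0):
    """Mean access time with a RAM cache in front of a disk farm."""
    return hit_rate * t_ram_ms + (1.0 - hit_rate) * t_disk_ms

for h in (0.90, 0.99, 0.999):
    print(f"hit rate {h:.3f}: {effective_access_ms(h):.3f} ms")
# the cache approaches all-RAM latency only as locality pushes
# the hit rate very close to 1
```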

-- Chuck

jones@fortune.UUCP (08/27/87)

In article <420@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>Current disk speeds are very close (factor of <=5 ??) to the speeds of
>10 or 20 year old disks.

Actually, in the 5 1/4" disk product world, disk access speeds have
increased by more than a factor of 100 relative to capacity.  And it
doesn't much matter whether you are referring
to track-to-track, average access, or maximum access times.

>As RAMS get larger, it gets more practical to build reasonable
>sized RAM-disks (they may require more volume, and have higher $$/Mbyte 
>
>As I look into my crystal ball, I see RAM and optical (WO) disks
>replacing magnetic memory within 5 years...(say, ~~~  30% of total, 
>and 90% of new systems/designs)
>
I'll take some of that money.  Concerning ram disks, a MByte of DRAM
draws somewhat more than 0.5 amps of +5VDC.  Concerning optical
storage, thermal effects are very slow and, while magneto-optics may
save the day, it has been very slow in coming.  Concerning magnetic
storage, vertical recording alone can increase densities by a factor
of 5.

Dan Jones

'Tis with our judgements as our watches, none
Go just alike, yet each believes his own.	Alexander Pope

mangler@cit-vax.Caltech.Edu (System Mangler) (09/03/87)

In article <2239@cbmvax.UUCP>, grr@cbmvax.UUCP (George Robbins) writes:
> Uh, I think this has more to do with the properties of Hydrogen as a coolent
> gas rather than anything to do with the speed of sound.

Less drag (big generators are 99% efficient!)
non-oxidizing (extends life of insulation & lubricant at high temperature)
high thermal conductivity

The very best power generators use helium, at 4 degrees Kelvin.

Don Speck   speck@vlsi.caltech.edu  {amdahl,rutgers}!cit-vax!speck

mangler@cit-vax.Caltech.Edu (System Mangler) (09/03/87)

In article <12718@amdahl.amdahl.com>, chuck@amdahl.amdahl.com (Charles Simmons) writes:
> I was under the impression that not only did silicon memory double
> in capacity every two years, but magnetic memory also doubled in
> capacity every two years as well.

Dynamic RAM chips have gone from 1 kilobit in 1971 to 1 megabit in 1986,
a doubling time of 1.5 years.  Disks take more like 4 years to double in
capacity.  (They are simultaneously getting more compact and cheaper).

The cross-over point should be about 1995, when a 64-Mbit RAM chip
will be an inch square and cost $2, while a 2-gigabyte 3.5-inch disk
will cost $2000.
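Those doubling times follow directly from the endpoints given:

```python
import math

def doubling_time_years(cap_start, cap_end, years):
    """Years per doubling, given capacities at two points in time."""
    return years / math.log2(cap_end / cap_start)

# 1 kilobit (1971) to 1 megabit (1986): ten doublings in fifteen years
print(doubling_time_years(2**10, 2**20, 1986 - 1971))   # 1.5
```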

Don Speck   speck@vlsi.caltech.edu  {amdahl,rutgers}!cit-vax!speck

halp@hammer.WV.TEK.COM (Hal Porter) (08/30/89)

    I'm looking for references on disk striping. I'm sure this has been
    hashed around in the recent past and I'm hoping someone compiled a list
    of references.

    All will be appreciated. Please reply via email.

    Hal Porter
    halp@pong.wv.tek.com
    ...!tektronix!orca!halp