[net.unix-wizards] Ram disks

jdi@psuvax.UUCP (07/28/83)

	Here at PSU, we have an 11/34A which, although running a very nice
(local 4.1BSD compatible) version of UNIX, has one problem:

	it is s.......l.......o.......w........

	In an attempt to remedy that situation we have tried buying a $1500
2K cache memory.  Wham.  $1500 down the tube.  Speed improvement: maybe 5%.

	Basically we have figured our problem to be this: Unix uses disks.
(notice the 's').  In fact, it uses lots of disks.  As many as possible.  It
eats them for breakfast, lunch, and dinner.  Unfortunately, our budget
is something around the level needed to buy a floppy drive, so we have not
had the opportunity to buy an 890 GIGI-byte drive like all you other rich
people.

	And thus we come to the real subject of this article, Ram Disks.  Our
local system-god (typical hacker, large, brilliant, erratic, speaks only C..)
saw an ad in a recent computer magazine, which he promptly misplaced, for
something called a Ram-Disk.  Doubtless all you CP/M type people already know
that a Ram-Disk is a bunch of high-speed ram (1 MEG or more) in a box, which
is designed to look like a disk drive to the cpu.  Obviously in UNIX this
would be a great device!  Imagine, swap a 100K program in 1 second!  Imagine
(supposing you have enough ram to fit /tmp on too..)  vi'ing a file and having
it start up instantaneously.  (Yes, vax people, I know it already does on an
11/780, but on an 11/34A.....)

	And so I began perusing magazines in search of this mythical dragon-
slayer.  To no avail.  I guess rich business types use VMS, which only needs
one disk (but then we all know how (cough, cough) Nice VMS is!).  And so I
decided to see if any of you net.people might know anything about Ram-Disks.

	Please respond.  Right now when our system use goes up, our only RK07
starts shaking so badly it moves across the floor and throws things off
itself like a raging demon.  (I always did let my DnD roots take hold of me.)


	Personally I think "Thanks in advance" is about as cute a phrase as
	such favourites as "peachy keen" and other such grade-school vocabulary,
	so I don't think I'll use it.

		-- John Irwin
		   The Pennsylvania State University

		   {burdvax, allegra}!psuvax!jdi

scw@ucla-locus@cepu.UUCP (07/29/83)

From:  Steve Woods <cepu!scw@ucla-locus>

National Semiconductor has a device (NSC NURAM) and a multi-device
controller (HEXACON) that also supports a CDC 9762, as an
RM02; a Cipher streamer, as a TU-10; and the NURAM, as an RS04.

National Semiconductor.
(408) 736-6994


Dataram Corp.
sells a series of semi-conductor/core disk emulators w/controllers:
emulation	semi	core
RF-11		BS-202	BC-202
RJS03/4		BS-204	BC-204

these last are also available for Qbus systems.

ron@brl-bmd@sri-unix.UUCP (07/30/83)

From:      Ron Natalie <ron@brl-bmd>

We've been so pleased with "RAM" disks that we've ordered
one for practically all our machines.  I should note however,
that we haven't bothered trying it with 34's.  We do use it
for our 11/70's and VAX's.  Since we figured we didn't swap
all that much we use it primarily for /tmp.  Sure speeds up
C compiles.  One system also uses one for the root, which contains
/bin, /lib, and /etc (/tmp is on another RAM and /usr is on disk).
That helps a lot too.

What we use are DATARAM bulk core and bulk MOS.  We use the
bulk core for the root on one machine and the MOS units
everywhere else.  The MOS units are available with battery backups.
The DATARAM MOS units also come with dual unibus ports (although
we only use one) and built in error logging and diagnostic routines.
They will emulate either DEC RF or RS fixed head disks.

I have had some experience with AMPEX's MEGASTORE and a little with
a thing called a MAXIRAM.  The main problem with these is that the
RS04 is a little more difficult to program if you would like to treat
each collection of 1Mbyte logical drives as a larger single unit.
The RF-style controller (because of the old DEC hardware
it is emulating) knows how to switch from drive to drive automatically.

The reliability of the Dataram has been fairly high, and so has the AMPEX's.

-Ron

P.S.  Anybody thought of putting /bin in ROM?

phil@amd70.UUCP (08/03/83)

Suppose you bought a few more megabytes of RAM for your computer
and increased the number of block buffers. Would this be as good
as a RAMDISK? Why?

jbray%bbn-unix@sri-unix.UUCP (08/03/83)

From:  James Bray <jbray@bbn-unix>

If one could force write-thru for data-integrity purposes, it might be
interesting to consider somehow putting one between cpu and drive as a
giant cache. Is there any version of these things that would allow this?

ron@brl-bmd@sri-unix.UUCP (08/04/83)

From:      Ron Natalie <ron@brl-bmd>

The PDP 11/70 systems I described already had maximum memory, and we
have block buffers, inodes, and clists buffered outside the kernel
address space.  What we were looking for was some extra performance
under some previously unturned stone.

If you are going to use RAM disk for paging/swap, pairing it byte for byte
against more main memory will probably show that RAM disk is not the way to
go.  I spent a lot of time at Martin Marietta convincing people that
RAM disk was not main memory.  Using it to buffer large common images
(RSX-11) was not the way to go.  It becomes useful when you are working with
data that must be stored in disk form.  Some of our database key files
for that application and, as I described, popular UNIX directories for
our current application are what you want these for.  You probably do
not want them to extend virtual memory (paging/swapping).  We do get
close to the same performance (I think) by utilizing paging/swapping
area on disks that are isolated from the rest of our system I/O.
Most of the 11/70's swap on RK05's that have their own controller (the
additional drives on this controller are seldom used).  The VAX's
are configured so that paging space is, where possible, on different
drives or controllers from the most active file systems.

-Ron

edhall@rand-unix@sri-unix.UUCP (08/04/83)

						      You probably do
    not want them to extend virtual memory (paging/swapping).  We do get
    close to the same performance (I think) by utilizing paging/swapping
    area on disks that are isolated from the rest of our system I/O.
    Most of the 11/70's swap on RK05's that have their own controller (the
    additional drives on this controller are seldom used).

Whoa!!  I thought that swapping is best done on your FASTEST disks.
An RK05 is SLOW, no matter whether it is exclusively used for swapping
or not.

I once experimented with this on a PDP-11/45 running V7 by moving
swapping from an AMPEX-980 (with a one-of-a-kind controller) to an
RK05.  Even though the AMPEX supported all filesystems as well as
usually serving as swap, moving swap off to the RK05 slowed things
down considerably; I immediately received a barrage of complaints as
to how the already slow system had gotten much slower.

Admittedly, an 11/70 with large memory is not going to be swapping
as much as this 11/45 was.  But speeding up the swapping process can
be quite significant on a system that needs to swap.  And on a paging
system, disk I/O speed can be even more important.  How a `ram disk'
is best used depends upon the system and what it is used for.

		-Ed Hall
		edhall@rand-unix
		{ucbvax,decvax}!trw-unix!randvax!edhall

ron%brl-bmd@sri-unix.UUCP (08/04/83)

From:      Ron Natalie <ron@brl-bmd>

Yes, RAM disk is an advantage to swapping when it is >> than the
amount of free memory you can get.  But most RAM disks are about 2Mbytes.
How big is your swap space?

-Ron

jfw%mit-ccc@BRL.ARPA (08/05/83)

About disk speeds:  your swapping disk wants to have a high transfer rate,
but can have slow access time:  you are going to tell it to find sector 3216
and transfer 50Kb.  Your filesystem disks want to have fast access time, but
can have slow transfer rates (relatively), because you are only going to do
512 byte (1K) transfers, in general.  I ran through this exercise when I found
an old RF-11 disk in our junk heap.  Though I thought it would be great for
swapping (relieve our tired CDC disk!), I discovered that its fast access
time (it is a fixed-head, head-per-track disk) and miserably slow transfer rate
would have made it adequate for filesystem use (expected throughput equal to our
CDC 9762), but would have been miserable for swapping -- the break-even point
was exactly 512 bytes...

At Lincoln Lab's now-defunct Applied Seismology Group, we had a "memory disk"
which consisted of the last .5M of our 11/44 address space.  It was used for
a couple of programs which made heavy use of temporary disk files.

bstempleton@watmath.UUCP (Brad Templeton) (08/05/83)

I have thought for some time it would be nice to see more use of
cheap ram to speed things up a bit.  We have all seen discussion
of using ram for /tmp during compiles, but why not have something the
user can control?  Essentially it's just like a register declaration:
a hint to the buffering system that these files are going to see
high use, so try to keep them in ram.
Compilers could "ram" and "unram" their temp files.  Users of
single-user systems could "ram" the files they are currently working
on.

Another very simple idea would be a /dev/freshram device.
This would, whenever opened, give you a fresh buffer of ram, different
each time, which gets paged to disk only if too big.
You would use it just like a normal file (read, write, seek, stat etc.)
and you could pass the file descriptor to your kids.  (A handy way
of passing large amounts of data quickly to kids without a pipe that
goes to disk anyway.)  It would of course be nice to "ram" a pipe, too.
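No /dev/freshram exists, but the fd-to-the-kids part of the idea can be
sketched today with tmpfile(3) standing in for it.  The function name
freshram_demo is invented for this sketch; the point is only that an open
descriptor survives fork(), so a child can read what the parent wrote
without any pipe in between.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* /dev/freshram is hypothetical; tmpfile(3) stands in for it here.
   Returns 1 if the child could read back what the parent wrote. */
int freshram_demo(void)
{
	FILE *f = tmpfile();
	int status;

	fputs("hello, kid\n", f);
	fflush(f);			/* push the data out of stdio */

	if (fork() == 0) {		/* child: rewind and read it back */
		char buf[32];

		rewind(f);
		_exit(fgets(buf, sizeof buf, f) != NULL
		      && strcmp(buf, "hello, kid\n") == 0 ? 0 : 1);
	}
	wait(&status);
	fclose(f);
	return status == 0;
}
```

A real /dev/freshram would differ in that each open() yields a fresh buffer
and nothing touches disk unless the buffer grows too big.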


-- 
	Brad Templeton - Waterloo, Ont. (519) 886-7304

arpaftp@cmcl2.UUCP (Arpanet Ftp) (08/06/83)

		Files/Disks Considered Obsolete
		----------- ---------- --------

I cannot understand why so many people seem delighted by the idea of using
RAM as a disk.  It seems a step backwards to me.  RAM is designed to
be accessed very quickly, one word at a time, in a manner that does not
depend on the address of the previous access.  It is much more flexible and
simpler to use than a disk, which must be told all sorts of garbage like
cylinder and track number of the desired data and insists on transferring an
integral number of blocks, whether you need them or not.  Consequently hundreds to
thousands of CPU instructions may be executed for each access of a disk (and
up to two system calls).  This is why disk accesses are done by the kernel
while RAM accesses are performed by user programs.
To package a RAM inside of hardware that makes it as complicated to use as a
disk is idiotic in the absence of other constraints (such as no more address
space left to put the RAM on the machine directly).

The only advantage of disks over RAMs is that they can transfer large amounts
of data without the attention of the CPU, except at startup and completion.
If this is the feature that you want, package the RAM inside of hardware that
implements it, but don't make it look like a disk, for Pete's sake.  Or
better yet, switch to a CPU that has an interruptible block-move instruction
(the IBM 370 does, I've been told), which gives you this feature for ALL of
your RAM and allows user programs to perform the accesses with no system calls.
Another alternative is to design a peripheral whose sole function is to copy
blocks of main memory from place to place.  Using DMA, it operates in parallel
with the CPU.  Again this obtains the desired feature for ALL of your RAM and,
better, is faster and does not require changing CPU's.

And as for programs and kernel code (e.g. pipes) that use disk files where,
on a virtual memory system, they should be using virtual memory -- rewrite
them.  A program that is to be portable and needs to manipulate large amounts
of temporary data should encapsulate every access to that data inside functions
and provide (with, say, conditional compilation) two versions of those functions:
one that uses program variables and one that uses files, so that the
appropriate choice can easily be made on any system.
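The encapsulation just described might look something like this in C.  The
names tmp_init/tmp_put/tmp_get are invented for this sketch; compile with
-DUSE_FILES to get the file-backed version, or without it to keep the data
in ordinary program variables.

```c
#include <stdio.h>
#include <stdlib.h>

/* One interface, two storage strategies, chosen at compile time. */
#ifdef USE_FILES

static FILE *tf;

void tmp_init(long n) { (void)n; tf = tmpfile(); }

void tmp_put(long i, long v)
{
	fseek(tf, i * (long)sizeof v, SEEK_SET);
	fwrite(&v, sizeof v, 1, tf);
}

long tmp_get(long i)
{
	long v = 0;

	fseek(tf, i * (long)sizeof v, SEEK_SET);
	fread(&v, sizeof v, 1, tf);
	return v;
}

#else	/* keep the data in program variables */

static long *tv;

void tmp_init(long n) { tv = malloc(n * sizeof *tv); }
void tmp_put(long i, long v) { tv[i] = v; }
long tmp_get(long i) { return tv[i]; }

#endif
```

The program proper calls only tmp_init, tmp_put, and tmp_get, so the choice
of storage can be made per-system without touching the rest of the code.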

There is a better way.  Why not have a programming language feature (analogous
to the *packed* attribute of Pascal) whereby a variable of any type is
declared to be non-time-critical, allowing the compiler to use a disk file
for the variable if deemed useful?  Further, we would allow the programmer
to specify, more strongly, that the variable MUST be allocated on a file, in
which case the name of the file could be given as well.  Assuming the language
has assignment for arbitrary structured data types and has some sort of
non-homogeneous sequence data type, it now has no need of explicit file types
and operations.  (And believe it or not Virginia, there are programming
languages that allow the assignment of non-scalar types!)  I proposed this
for Pascal several years ago in SIGPLAN, but nobody was listening (basically
I just argued that the word *file* should be taken to mean *sequence*, not:
*use-a-disk*, and that the word *slow* be prefixed to any type declaration to
indicate that slow mass-storage may be used).  Now, since there are no longer
any files, hence no fd's, standard I/O is accomplished through predefined
sequence variables, e.g. the INPUT and OUTPUT variables, in Pascal.

What I am proposing, then, is to reverse this dangerous, wrong-thinking notion
of using RAMs disguised as disks for /tmp files which should have been RAM
variables anyway (actually, the reverse of the user-level analogue, using files
like variables for reasons not related to the problem):  Let's demand a
programming language where we can use files as flexibly and powerfully as
faster variables.  (I'll design it, if one of you will implement it on Unix.)
I hereby challenge any or all of you to defend the presence of file operations
in a programming language like the one I described, by telling me what you
couldn't do with the *slow* variables that you could do with explicit file
operations.  Please mail responses to me -- I have a sincere research interest
in your opinion.  DO NOT USE the r[eply] command; write to one of:

		...!cmcl2!acf2!condict
		...!decvax!cmcl2!acf2!condict
		...!philabs!cmcl2!acf2!condict

					Michael Condict     (212) 460-7239
					C.I.M.S., New York U.
					251 Mercer St.
					New York, NY    10012

kent%Shasta@su-score@decwrl.UUCP (08/08/83)

From:  Chris Kent <decwrl!kent%Shasta@su-score>

I think we're about to re-invent the concept of the one-level store,
first proposed (as far as I can remember) by Dennis and Van Horn in
their 1966 CACM paper, "Programming Semantics for Multiprogrammed
Computations", CACM, v9n3, March 1966, recently reprinted in the CACM
25th anniversary edition.

If you've never read this paper, go out and do it now. It's an amazing
collection of foresight and forethought.

chris
----------

gwyn@brl-vld@sri-unix.UUCP (08/13/83)

From:      Doug Gwyn (VLD/VMB) <gwyn@brl-vld>

I think your point about disk files not being a natural programming
construct, and the suggestion about a replacement concept, are both
good ideas.

Disks DO have one advantage over RAM, although not from the programming
language viewpoint, and that is that their cost per bit is much lower.
I think we will always have multi-level storage schemes because of the
economic considerations.  It WOULD be nice if programs didn't
unnecessarily have a particular partitioning built into them.

I would like to see the specification of time criticality taken a
step farther, and specify the maximum (average and/or worst-case) time
permissible for function execution (or maybe just data access).  I
don't see a good way of implementing this, but it would be useful for
true real-time systems.