[comp.os.research] Extremely Fast Filesystems

puder@zeno.informatik.uni-kl.de (Arno Puder) (07/31/90)

In <5465@darkstar.ucsc.edu> Craig Partridge writes:

> I'm curious.  Has anyone done research on building extremely
> fast file systems, capable of delivering 1 gigabit or more of data
> per second from disk into main memory?  I've heard rumors, but no
> concrete citations.

Tanenbaum (ast@cs.vu.nl) has developed a distributed system called
AMOEBA. Along with the OS kernel there is a "Bullet file server"
(so named because it is supposed to be very fast).

Tanenbaum's philosophy is that memory is getting cheaper and cheaper,
so why not load the complete file into memory? This makes the server
extremely efficient. Operations like OPEN or CLOSE on files are no
longer needed: files are read and written in their entirety, so the
complete file is transferred for each update.
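The whole-file model above can be sketched in a few lines. This is only an
illustration of the idea, not the real Amoeba/Bullet interface; the names
(BulletStore, create, read, update) and the use of plain integers in place of
capabilities are my own assumptions:

```python
# Hypothetical sketch of a whole-file server: no OPEN/CLOSE operations,
# every read delivers the complete file, and an "update" creates a new
# file rather than modifying one in place. Not the actual Amoeba API.

class BulletStore:
    def __init__(self):
        self._files = {}    # file id (stand-in for a capability) -> bytes
        self._next_id = 0

    def create(self, data: bytes) -> int:
        """Store a complete file in memory; return its id."""
        fid = self._next_id
        self._next_id += 1
        self._files[fid] = bytes(data)   # whole file held in memory
        return fid

    def read(self, fid: int) -> bytes:
        """Deliver the complete file in a single operation."""
        return self._files[fid]

    def update(self, fid: int, data: bytes) -> int:
        """Files are never changed in place: updating yields a new file."""
        return self.create(data)
```

Note that because old contents are never overwritten, an "update" leaves the
original file intact under its old id, which is exactly why this model is
awkward for workloads that change small pieces of large files.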

The benchmarks are quite impressive, although I doubt that the
concept is generally useful (especially for transaction systems in
databases, which update small parts of large files).

You can download Tanenbaum's original paper (along with a "complete"
description of AMOEBA) via anonymous ftp from midgard.ucsc.edu
in ftp/pub/amoeba.

Hope that helps,

Arno



------------------------------------------------------------------------
|  Arno Puder                    |   Q: Do you know Beethoven's Ninth? |
|  Rechenzentrum Kaiserslautern  |   A: No, I didn't know that he was  |
|--------------------------------|      married that often!            |
|  puder@rhrk.uni-kl.de          |                                     |
------------------------------------------------------------------------