[comp.os.research] Distributed OS Research - an overview

taylor%hpdstma@hplabs.HP.COM (Dave Taylor) (06/14/88)

Darrell,
	I thought it might be interesting to have you post a copy of 
this first pass at an overview of distributed OS research going on 
in academia ... it would be nice if you also emphasise when you post
it that it is a *draft* copy, so people are strongly encouraged to 
channel feedback to me...

Thanks!				Another UCSD-ite [80-84, CS],

						-- Dave Taylor

----------------------------------------------------------------------
----- attached: ~1300 lines of text, written in troff and to be  -----
----- processed through "tbl", then troff using "mm"...[that is, -----
----- a command like `tbl saved.article | troff -mm' is needed]  -----
----------------------------------------------------------------------

# This is a shell archive.  Remove anything before this line,
# then unpack it by saving it in a file and typing "sh file".
#
# Wrapped by taylor at hplabsz on Mon Jun 13 16:00:42 1988
#
# This archive contains:
#	distributed.os	
#
# Error checking via wc(1) will be performed.

echo x - distributed.os
cat >distributed.os <<'@EOF'
.SA 1
.nr Hy 1
.rs
.nr Pt 1
.PH ""
.PF "''Page \\\\nP''"
.ce 9999
.ft R
.ps 14
.ft B
.PH "'\s+2Distributed Operating Systems''Dave Taylor'"
.sp 3
.ft B
.ps 18
Distributed Operating Systems
.sp 4
.ps 14
An Overview of Current Research
.sp 8
.ft I
.ps 10
Dave Taylor \(em taylor@hplabs.hp.com
.sp 3
.ce
.ft I
External draft of: Monday, June 13th 1988
.ft B
.ps 12
.sp 3
Introduction
.sp
.ps 10
.ft R
In the last few decades, we've seen computers
move from large, monolithic machines that allowed
a single user complete access to the entire machine
to large machines that allowed multiple users
to have access to the machine simultaneously.  
Then, a decade or so ago, the next step was taken:
the machines became smaller and again returned
to single-user computers, this time being called
`personal computers'.  The final step, so far,
is to tie these personal computers to a
central resource system for shared disks,
printers, CPU cycles, and so on: distributed
computing.
.sp
Since the late 1970s, distributed computing, and 
more specifically distributed operating systems
research, has yielded an impressive amount of 
excellent work, moving to the forefront of computer 
science research in the university environment.
.sp
This paper presents a succinct overview of the
major distributed operating systems research going on
at universities throughout the world, including each
project's current status,
its operating environment, and a brief description of 
each system.
.sp
Appendix one is a bibliography of introductory papers, not 
only for the projects discussed herein but also for other related 
projects and papers of interest, and appendix two lists non-university
research in the distributed operating systems area.
.sp
If you have any questions on this paper, any
corrections, or any further information, please feel
free to contact me at the electronic mail address listed above.
.sp 2
.ft B
.ps 12
Format of this Paper
.ps 10
.ft R
.sp
The format used herein is:
.BL
.LI
Name of the Project
.LI
Where the Research is Going On
.LI
Primary Contact People
.LI
Current Project Status
.LI
Operating Environment
.LI
Brief Description of the Project
.LI
References
.LE
.sp
The references section contains references to
specific citations in the appendix, and those
in bold face were used as primary references for
the information in this paper.
.sp 2
.SK
.ce
.ps 14
.ft B
Amoeba
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Amoeba Project
.LI Where:
Vrije Universiteit, Amsterdam
.LI Contact:
Dr. S. J. Mullender
.br
Centre for Mathematics and Computer Science
.br
Kruislaan 413, 
1098 SJ Amsterdam, The Netherlands
.LI Status: 
Active
.LI Environment: 
Digital Equipment Corporation VAX 11/750's,
Motorola 68000-based workstations (unknown vendor)
and a Proteon ring network.
.LI Description:
``Fifth generation computers must be fast, reliable
and flexible.  One way to achieve these goals is to
build them out of a small number of basic modules
that can be assembled together to realise machines
of various sizes.  The use of multiple modules can
not only make the machines fast, but also achieve a 
substantial amount of fault tolerance.''
.sp
Amoeba focuses on not only the use and management
of the large processor-set, but also on the
communications and \fIprotection\fR aspects
as well.
.sp
Overall, Amoeba takes more of an `object oriented'
approach to distributed operating systems, with
the designers rejecting the traditional
approach of a multilayer set of discrete 
processes (e.g. the ISO seven layer model).
Nonetheless, Amoeba is based on message-passing
modules, with a transaction approach to file
transfer (versus the more common stream).
.sp
They spurn the ISO seven layer model in favor
of their own simplified, four layer model:
.BL
.LI
The Semantic Layer: for example, what commands do specific
types of processor modules understand?  This is the
only layer visible to users.
.LI
The Reliable Transport Layer: responsible for requests and
replies between clients and servers \(em presumably this is
where the transaction protocol is used.
.LI
The Port Layer: service locations and transmission 
of datagrams (unreliable packet delivery) to servers.
Also enforces the protection mechanism.
.LI
The Physical Layer: deals with the electrical,
mechanical, and related aspects of the network
hardware.
.LE
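The request/reply `transaction' style of the reliable transport layer can be sketched in a few lines of Python.  This is a toy model with invented names (`Server', `transaction'), not Amoeba code; it only illustrates the blocking request/reply exchange used in place of a stream:

```python
# Toy sketch of a request/reply transaction, in the spirit of Amoeba's
# reliable transport layer.  All names here are illustrative.

class Server:
    """A server module understands a fixed set of commands (semantic layer)."""
    def __init__(self, handlers):
        self.handlers = handlers              # command name -> function

    def handle(self, command, payload):
        if command not in self.handlers:
            return ("error", "unknown command")
        return ("ok", self.handlers[command](payload))

def transaction(server, command, payload):
    """Client side: one blocking request/reply pair; no stream is kept open."""
    status, result = server.handle(command, payload)
    if status != "ok":
        raise RuntimeError(result)
    return result

file_server = Server({"read": lambda name: "<contents of %s>" % name})
print(transaction(file_server, "read", "notes.txt"))
```

In the real system the request would travel as a datagram handed to the port layer, which also enforces protection.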
.LI References:
\fB[Mullend86]\fR, [Mullend82]
.LE
.SK
.ce
.ps 14
.ft B
Andrew
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Andrew System
.LI Where: 
Carnegie Mellon University, Pittsburgh, Pennsylvania
.LI Contact: 
Dr. Alfred Spector 
.br
Information Technology Center 
.br
Carnegie-Mellon University
.br
Schenley Park
.br
Pittsburgh, PA 15213
.br
email:
.ft CW
spector@andrew.cmu.edu
.ft R
.sp
phone: (412) 268-6731
.LI Status: 
Active
.LI Environment: 
IBM RT-PCs and some Sun workstations
.LI Description:
The Andrew project is designed to be a prototype
computing and communications system for universities.
The main areas that are targeted by the development
team are:
.BL
.LI
Computer Aided Instruction
.LI
Creation and use of new tools
.LI
Communications
.LI
Information Access
.LE
.sp
As with a number of other distributed operating systems,
Andrew is a ``marriage between personal computers and
time-sharing.  It incorporates the flexibility and
visually rich user-machine interface made possible
by the former, with the ease of communication and
information-sharing characteristic of the latter.''
.sp
Note: there is no support for diskless machines in the Andrew
system.  The reasons: a less robust system (if the network is
down, so is the machine); the cost of a complete server is similar to
the cost of individual disks; individuals are unlikely to 
purchase machines only functional on the local CMU network;
paging over the network precludes privacy and security;
and it is difficult to support the varied, heterogeneous
set of computers typically found at a university.
.sp
Andrew is based on Berkeley 4.2 BSD, partially
due to one of the premises of the project:
that a significant percentage of the user population
will be involved in ongoing software development. \*F
.FS
This is an interesting point of difference between academia
and industry \(em in the industry customers are more
interested in `solutions' than in something that they'll
need to learn how to program to use.
.FE
.sp
The Andrew system is based on a virtual
single file system called VICE (that is, a
file system with global naming and a single
hierarchical organization) and a workstation
based application support system called VIRTUE.
Typical Andrew workstations have VIRTUE running
on top of BSD 4.2, using the campus IBM Token Ring
Network to communicate with the VICE file system.
(Andrew also supports smaller personal computers with
minimal functionality, yet still more sophisticated
than having the PC emulate a dial-up terminal.)
.sp
The VICE file space is actually broken into two
different parts: local and shared space.  The local
space is accessible to the user (typically on their
own machine's disk) but inaccessible to the rest of
the Andrew community, while the shared space can
actually exist anywhere on the network and be accessed
by anyone with the appropriate permissions.
.LI References:
\fB[Morris86]\fR, \fB[Nichols87]\fR, [Satyan85]
.LE
.SK
.ce
.ps 14
.ft B
Argus
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Argus Project
.LI Where:
Massachusetts Institute of Technology
.LI Contact:
Barbara Liskov,
.br
Massachusetts Institute of Technology
.br
Laboratory for Computer Science
.br
Cambridge, MA 02139
.br
email:
.ft CW
liskov@lcs.mit.edu
.ft R
.LI Status: 
Active
.LI Environment: 
unknown
.LI Description:
``Argus is a programming language and system developed to
support the implementation and execution of distributed
programs.  It provides mechanisms that help programmers
cope with the special problems that arise in distributed
programs, such as network partitions and crashes of
remote nodes.''
.LI References:
\fB[Liskov87]\fR
.LE
.SK
.ce
.ps 14
.ft B
Cambridge Distributed System \(em CDS
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Cambridge Distributed Computing System
.LI Where:
The University of Cambridge, England
.LI Contact:
Dr. Roger Needham or Dr. Andrew Herbert
.br
Computer Laboratory
.br
The University of Cambridge
.br
Cambridge, England
.LI Status: 
Presumed to have been completed
.LI Environment: 
Unknown, but probably based on Xerox machines and 
some sort of personal computing devices.
.LI Description:
The Cambridge Distributed System is of great interest
for a number of reasons, including its being
based on the Cambridge Digital Communications Ring,
a `slot ring' over twisted pair wires.
.sp
Another item of interest is that the system is
built of a virtual processor bank, and when a
user connects to the system (via a terminal
concentrator through a resource management
system) they are assigned a certain number 
of actual CPUs that remain theirs throughout
the entire session.
.sp
What's interesting about this approach is that it
neatly solves a couple of traditional problems
in distributed computing: namely, process 
migration and utilisation of multiple processors
by a single task.  It also allows a network that
has \fIn\fR possible users to have significantly
fewer than \fIn\fR processors available, with the
actual number based on the peak demand on
the system.
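The processor-bank idea can be sketched as a simple allocator.  This is a toy model; the names and numbers below are invented, not taken from the Cambridge system:

```python
# Toy sketch of a processor bank: a session is granted whole CPUs by the
# resource manager and keeps them until logout.  Illustrative names only.

class ProcessorBank:
    def __init__(self, processors):
        self.free = set(processors)
        self.sessions = {}                    # session -> granted processors

    def login(self, session, wanted):
        """Grant `wanted` CPUs for the whole session, if enough are free."""
        if wanted > len(self.free):
            raise RuntimeError("no processors free at peak demand")
        granted = {self.free.pop() for _ in range(wanted)}
        self.sessions[session] = granted
        return granted

    def logout(self, session):
        """Return the session's CPUs to the bank."""
        self.free |= self.sessions.pop(session)

# Four CPUs can serve many more than four potential users.
bank = ProcessorBank({"cpu0", "cpu1", "cpu2", "cpu3"})
bank.login("alice", 2)
bank.login("bob", 1)
bank.logout("alice")
print(len(bank.free))    # alice's two CPUs are free again
```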
.LI References:
\fB[Needham82]\fR
.LE
.SK
.ce
.ps 14
.ft B
DASH
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The DASH Project
.LI Where:
The University of California at Berkeley
.LI Contact:
Dr. David Anderson or Dr. Domenico Ferrari, 
.br
Computer Science Division 
.br
Department of Electrical Engineering and Computer Sciences 
.br
University of California at Berkeley
.br
Berkeley, CA, 94720.
.br
email:
.ft CW
anderson@arpa.berkeley.edu
.ft R
or
.ft CW
ferrari@arpa.berkeley.edu
.ft R
.LI Status: 
Active
.LI Environment: 
Sun 3 workstations.
.LI Description:
DASH is designed to be a Very Large Distributed System, i.e. one
that is numerically, geographically, and administratively
distributed, offers access to non-local resources, and is
transparent (no syntactic changes for local versus remote
access, and relatively minor performance degradation).
.sp
``The following are some of the principles for VLDS design 
that we have arrived at . . . The DASH prototype incorporates
all of these principles:
.BL
.LI
Separate the levels of network communication, execution
environment, execution abstraction, and kernel structure,
and provide an open framework where possible.
.LI
Use a hybrid naming system using a tree-structured symbolic
naming for global permanent entities, and capabilities to
communications streams for other entities.
.LI
When possible, put communication functions such as security
and interface scheduling at a \fIhost-to-host\fR rather
than \fIprocess-to-process\fR level, and consolidate
these functions in a \fIsub-transport\fR layer.
.LI
Provide flexible support for stream-oriented communication.
.LI
Provide a service abstraction that allows for replication,
local caching and fault-tolerance, but does not directly
supply them.
.LI
Support real-time computation and communication at every
level.''
.LE
.LI References:
\fB[Anders87]\fR, [Anders87/2]
.LE
.SK
.ce
.ps 14
.ft B
DEMOS/MP
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The DEMOS/MP Distributed Operating System
.LI Where:
The University of Wisconsin
.LI Contact:
Dr. Barton Miller
.br
Computer Sciences Department
.br
The University of Wisconsin
.br
1210 West Dayton Street
.br
Madison, Wisconsin  53706
.br
email:
.ft CW
miller@cs.wisc.edu
.ft R
.LI Status: 
Presumed Active
.LI Environment: 
A collection of Z-8000 processor-based workstations (unknown
vendor) on a LAN
.LI Description:
``The DEMOS operating system began on a Cray 1 computer
and has since moved through several computing environments.
Its current home is a collection of Z8000 processors
connected by a network.  This distributed version of DEMOS
is known as DEMOS/MP.  DEMOS has successfully moved
between substantially different architectures, while 
providing a consistent programming environment to the
user.''
.sp
The main goals of the DEMOS/MP project are:
.BL
.LI
Provide a clean message interface
.LI
Provide a well structured system that can be easily
modified (DEMOS/MP is the basis for a number of
different research projects at Wisconsin, including
distributed program measurement, reliable
computing and process migration)
.LI
To keep a high degree of network transparency while
experimenting to see what mechanisms could be easily
adapted to a distributed environment.
.LE
.sp
Programs are constructed of `computational elements'
(called \fIprocesses\fR) and `communications paths' that
join the elements (called \fIlinks\fR).  To make
DEMOS distributed, the approach was to leave the
computational elements intact and modify the links
to support distribution of the processing.
.sp
Processes are free to migrate without letting the
initiating client know; migrated processes leave
a `link process address' that is a pointer to the
new machine that the process is running on (which
can be a link process address, ad infinitum).
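The forwarding-address scheme can be sketched as follows (a toy model with invented names, not DEMOS/MP code): message delivery simply chases `forward' entries until it reaches the process itself.

```python
# Toy sketch of migration with forwarding addresses, in the spirit of
# DEMOS/MP.  Each machine maps a process id to either the process itself
# or a forwarding entry naming the machine it moved to.

machines = {"a": {}, "b": {}, "c": {}}

def deliver(machine, pid, message):
    """Follow forwarding addresses until the real process is reached."""
    kind, value = machines[machine][pid]
    if kind == "forward":
        return deliver(value, pid, message)   # chains may be arbitrarily long
    return value(message)                     # invoke the process's handler

def migrate(pid, old, new):
    """Move the process and leave a forwarding address behind."""
    machines[new][pid] = machines[old][pid]
    machines[old][pid] = ("forward", new)

machines["a"][42] = ("process", lambda m: "got " + m)
migrate(42, "a", "b")
migrate(42, "b", "c")
# The client still addresses machine "a"; delivery follows a -> b -> c.
print(deliver("a", 42, "hello"))
```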
.sp
The DEMOS/MP system is based on a special purpose
lightweight protocol based on the original DEMOS
model of Inter Process Communication (IPC).  Due
to this basis, the system supports remote demand
paging (including having multiple machines
sharing a single page device), and also allows
diskless Z8000s to be connected to the network.
.sp
The DEMOS file system is broken up into four
separate file system processes (\fInot specified\fR).
.LI References:
\fB[Miller87]\fR, [Powell77]
.LE
.SK
.ce
.ps 14
.ft B
EDEN
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Eden distributed system
.LI Where:
The University of Washington
.LI Contact:
Dr. Andrew Black
.br
Department of Computer Science FR-35
.br
University of Washington
.br
Seattle, WA 98195
.br
email: 
.ft CW
black\s-1@\s+1cs.washington.edu
.ft R
.br
phone: (206) 543-9281
.LI Status: 
Presumed complete
.LI Environment: 
Digital Equipment Corporation VAX machines,
and Sun Workstations
.LI Description:
Eden represents a merging of three different approaches
to operating system design, namely:
.BL
.LI
Eden is a complete distributed operating system.
.LI
Eden is an object-oriented system (a descendant of
the Hydra system)
.LI
Eden is also a system based on a single Remote Procedure Call
(RPC) mechanism.
.LE
.sp
``It is important to observe that Eden is not a set of
facilities provided on top of an existing operating system
in an attempt to graft distribution onto some other
model of computation.  This is true despite the fact that 
the current prototype of Eden is implemented using the
facilities of Unix [Berkeley 4.2].  Eden itself provides
the user with a complete environment for program
development and execution.''
.sp
``Eden is an integrated system with a single uniform
system-wide [i.e. global] namespace spanning multiple
machines.''  Within the Eden system, each process or
set of processes (called an \fIobject\fR) has the following 
attributes:
.BL
.LI
Objects are referenced by \fIcapabilities\fR
.LI
\fIInvocation\fR is how objects request and obtain
services from other objects
.LI
Objects are \fImobile\fR (the processes can migrate
freely)
.LI
Objects are \fIactive\fR at all times
.LI
Objects always have a \fIconcrete Edentype\fR which 
is in essence a description of the [finite] state
machine that represents the behaviour of that 
particular object.
.LI
All objects have a \fIdata part\fR, including long
and short term data.
.LI
Objects can \fIcheckpoint\fR autonomously (that is, 
they can choose to write their current state to the 
file system).
.LE
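Capabilities and invocation, the two low-level abstractions, can be sketched as below.  This is an illustrative toy in Python, not the Eden Programming Language, and the names (`create', `invoke') are invented:

```python
# Toy sketch of capability-based invocation in the spirit of Eden: objects
# are reached only through capabilities, and services are obtained by
# invocation.  Illustrative only.

import secrets

objects = {}                                  # capability token -> object

class EdenObject:
    """An object with operations, a data part, and autonomous checkpointing."""
    def __init__(self, operations, data):
        self.operations = operations          # its behaviour, loosely a "type"
        self.data = data                      # long and short term data part
        self.checkpoints = []

    def checkpoint(self):
        """Autonomously save the current state (here, just in memory)."""
        self.checkpoints.append(dict(self.data))

def create(operations, data):
    cap = secrets.token_hex(8)                # an unforgeable reference
    objects[cap] = EdenObject(operations, data)
    return cap

def invoke(cap, operation, *args):
    """Possession of the capability is what grants access to the object."""
    return objects[cap].operations[operation](objects[cap], *args)

cap = create({"get": lambda o, k: o.data[k],
              "put": lambda o, k, v: o.data.__setitem__(k, v)},
             {"colour": "blue"})
invoke(cap, "put", "colour", "green")
print(invoke(cap, "get", "colour"))
```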
.sp
Eden was designed and coded in the Eden Programming Language,
a language based on Concurrent Euclid\*F.
.FS
An extension of the Pascal language that adds processes,
modules, and monitors.
.FE
This provides direct support for the low level 
abstractions of Eden (capabilities and invocation),
as well as supporting lightweight processing within
individual Eden processes.
.sp 2
This project is assumed to be completed.  It is the direct
precursor of the University of Washington Heterogeneous
Computer Systems (HCS) project, described elsewhere in this paper.
.LI References:
\fB[Black85/2]\fR
.LE
.SK
.ce
.ps 14
.ft B
HCS
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Heterogeneous Computer Systems Project
.LI Where:
The University of Washington
.LI Contact:
Dr. David Notkin 
.br
Department of Computer Science, FR-35
.br
University of Washington
.br
Seattle WA, 98195
.br
email:
.ft CW
notkin@cs.washington.edu
.ft R
.br
phone: (206) 545-3798
.sp
.LI Status: 
Active
.LI Environment: 
15 different hardware/software combinations, including DEC
VAXen (including VAXstation IIs), Sun workstations, 
Xerox D-Machines, Tektronix 4404/4405 computers and
IBM RT-PCs.  Operating Systems include VMS, Unix, and Xerox OS.
.LI Description:
HCS is designed to alleviate the following common problems
in heterogeneous academic computing environments: 
\fIinconvenience\fR (e.g. multiple, duplicate systems and
peripherals, or isolation from the entire campus computing facility);
\fIexpense\fR (e.g. the cost of extra machines, servers, 
peripherals, etc.); and \fIdiminished effectiveness\fR (too much
time spent porting between different campus machines and 
operating systems to be productive).
.sp
Consequently, HCS is designed for many system types and
different operating systems.  Based on TCP/IP, it has
\fIremote procedure calls\fR (RPC) and \fInaming\fR
(to create a global name space for the entire
heterogeneous environment) as the two key technologies.
.sp
The approach is to choose key network services and to
redo them for the networked environment.  The services
HCS supports are remote computation, mail, and filing.
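The interplay of the two key technologies can be sketched as a name service that hides which (possibly very different) host implements each service; the RPC then dispatches through it.  The names below are invented for illustration, not HCS interfaces:

```python
# Toy sketch of a global name space plus RPC dispatch, in the spirit of
# the two key HCS technologies.  Illustrative names only.

name_service = {}                    # global name -> (host, procedure)

def register(name, host, procedure):
    """Publish a service under one global, system-independent name."""
    name_service[name] = (host, procedure)

def rpc(name, *args):
    """Resolve the global name, then call whichever host implements it."""
    host, procedure = name_service[name]
    return {"host": host, "result": procedure(*args)}

# Hosts with different operating systems implement the same service set.
register("/hcs/filing/read", "vms-host", lambda f: "record-oriented " + f)
register("/hcs/mail/send", "unix-host", lambda to, body: "queued for " + to)

reply = rpc("/hcs/filing/read", "report.txt")
print(reply["host"], reply["result"])
```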
.sp
To accomplish this they have four cornerstones:
.BL
.LI
RPC and naming give network access to the services
fundamental to cooperation and sharing
.LI
The system is designed to accommodate multiple standards
.LI
Tradeoff: \fInot\fR transparent access to existing software 
(that is, unlike NFS, RFA, etc., where the program will run in
the distributed environment unchanged, HCS requires relinking
and possibly modification to the source).
.LI
Tradeoff: HCS is designed to support a system network
rather than a language-based network.
.LE
.sp
Designed to be modular, portable, and non-OS dependent.
.LI References:
\fB[Notkin88]\fR, [Black85]
.LE
.SK
.ce
.ps 14
.ft B
ISIS
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The ISIS System
.LI Where:
Cornell University
.LI Contact:
Dr. Kenneth Birman
.br
Department of Computer Science
.br
Cornell University
.br
Ithaca, New York 14853
.br
email:
.ft CW
ken\s-1@\s+1cs.cornell.edu
.ft P
.br
phone: (607) 255-9199
.LI Status: 
Active: new release announced June 7th, 1988
.LI Environment: 
Hewlett-Packard, Sun, Digital Equipment Corporation
and GOULD computers (specific models unknown).
.LI Description:
``The ISIS system transforms abstract type specifications
into fault-tolerant distributed implementations while
insulating users from the mechanisms used to achieve
fault-tolerance . . . the fault-tolerant implementation
is achieved by \fIconcurrently updating\fR replicated
data.  The system itself is based on a small set of
communication primitives.''
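The idea of concurrently updating replicated data can be sketched with a toy broadcast primitive (invented names, not the ISIS API):

```python
# Toy sketch of replication by broadcasting updates to every copy, in the
# spirit of ISIS.  Illustrative only: real ISIS must order concurrent
# broadcasts and cope with failed sites.

replicas = [{"balance": 100} for _ in range(3)]

def broadcast(update):
    """Apply one update to every replica so all copies stay identical."""
    for replica in replicas:
        update(replica)

def deposit(amount):
    broadcast(lambda r: r.update(balance=r["balance"] + amount))

deposit(50)
# Any surviving replica carries the current state if another site crashes.
print([r["balance"] for r in replicas])
```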
.sp
``The performance of distributed fault-tolerant services
running on this initial version of ISIS is found to be
nearly as good as that of non-distributed, fault-intolerant
ones.''
.sp
``No kernel changes are needed to support ISIS; you just 
roll it in and should be able to use it immediately.
The current implementation of ISIS performs well in
networks of up to about 100-200 sites.''
.sp
``You will find ISIS useful if you are interested in
developing relatively sophisticated distributed
programs under Unix (eventually, other systems too).
These include programs that distribute computations
over multiple processes, need fault-tolerance, coordinate
activities underway at several places in a network,
recover automatically from software and hardware
crashes, and/or dynamically reconfigure while
maintaining some sort of distributed correctness 
constraint at all times.  ISIS is also useful in
building certain types of distributed real time
systems.''
.sp
The ISIS group created a fault-tolerant, shadowed
version of Sun's NFS, called RNFS, which has a worst-case
performance degradation of 25%-50%, but offers
transparent file replication, etc.
.LI References:
\fB[Birman85], [Birman88]\fR
.LE
.SK
.ce
.ps 14
.ft B
LOCUS
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The LOCUS Distributed Operating System
.LI Where:
The University of California at Los Angeles
.LI Contact:
Dr. Gerald Popek
.br
Department of Computer Science
.br
The University of California at Los Angeles
.br
Los Angeles, CA
.br
email:
.ft CW
popek@maui.cs.ucla.edu
.ft R
.br
phone: (213) 825-6507
.LI Status: 
Transferred to commercial venture: LOCUS Computing Corporation,
Santa Monica, California.
.LI Environment: 
International Business Machines PCs, 11/70's,
and Digital Equipment Corporation VAX 11/750's.
.LI Description:
``LOCUS is a distributed operating system which 
supports transparent access to data through a
network wide filesystem, permits automatic
replication of storage, supports transparent
distributed process execution, supplies a number of
high reliability functions such as nested 
transactions, and is upward compatible with
Unix.  Partitioned operation of subnets and their 
dynamic merge is also supported.''
.sp
(further description is deemed unnecessary due to the
status of the project)
.LI References:
\fB[Walker83]\fR
.LE
.SK
.ce
.ps 14
.ft B
Mach
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Mach Project
.LI Where:
Carnegie-Mellon University, Pittsburgh, Pennsylvania
.LI Contact:
Dr. Rick Rashid
.br
Computer Science Department
.br
Carnegie-Mellon University
.br
Pittsburgh, PA 15213-3890
.br
email:
.ft CW
rashid\s-1@\s+1spice.cs.cmu.edu
.ft P
.br
phone: (412) 268-2617
.LI Status: 
Active
.LI Environment: 
Mach runs on a considerable number of different
machines, including the Digital Equipment Corporation's
VAX series (including the 11/780, 8600 and microVAXen),
Sun series 3 workstations, the IBM RT-PC, and the Encore
MultiMax.
.LI Description:
``Mach is a multiprocessor operating system kernel ...
In addition to binary compatibility with Berkeley 4.3
Unix, Mach also provides a number of new facilities
not available in 4.3:
.BL
.LI
Support for tightly coupled and loosely coupled
general purpose multiprocessors.
.LI
An internal adb-like kernel debugger.
.LI
Support for transparent remote file access between
autonomous systems.
.LI
Support for large, sparse virtual address spaces,
copy-on-write virtual copy operations, and 
memory mapped files.
.LI
Provisions for user-provided memory objects and
pagers.
.LI
Multiple threads of control within a single
address space.
.LI
A capability-based interprocess communication 
facility integrated with virtual memory 
management to allow transfer of large amounts of
data (up to the size of a process address space)
via copy-on-write techniques.
.LI
Transparent network interprocess communications
with preservation of capability protection 
across network boundaries.''
.LE
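The copy-on-write IPC/VM integration can be sketched as below (a toy model, not Mach's implementation): sending a region copies only the page table, and a page is physically copied only when someone writes it.

```python
# Toy sketch of copy-on-write message transfer in the spirit of Mach's
# IPC/virtual memory integration.  Illustrative only.

class Region:
    """A range of virtual memory, as a list of (shared) page objects."""
    def __init__(self, pages):
        self.pages = pages

    def send(self):
        """Transfer to a receiver: share the pages, copy only the table."""
        return Region(list(self.pages))

    def write(self, index, data):
        """Copy-on-write: the writer gets a new private page."""
        self.pages[index] = {"data": data}

sender = Region([{"data": "p0"}, {"data": "p1"}])
receiver = sender.send()              # cheap, whatever the region's size
receiver.write(0, "p0-modified")
# The sender's copy is untouched; the unwritten page is still shared.
print(sender.pages[0]["data"], receiver.pages[0]["data"])
```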
.sp
More than that, however, Dr. Rashid's vision
is to reorganize Mach to free it from any upward
dependencies on the Berkeley 4.3 Unix kernel (which
it conceptually fits under) and have available a
portable `microkernel' that can be fit under any
operating system to offer easy RPC and IPC
access, as well as a shared file system, in an
arbitrary, heterogeneous environment.
.LI References:
\fB[Rashid87]\fR
.LE
.SK
.ce
.ps 14
.ft B
The Newcastle Connection
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Newcastle Connection Protocol
.LI Where:
The University of Newcastle upon Tyne, England
.LI Contact:
Dr. C.R. Snow or Dr. H. Whitfield
.br
Computing Laboratory
.br
University of Newcastle upon Tyne
.br
Claremont Road
.br
Newcastle upon Tyne NE1 7RU
.br
England
.LI Status: 
Presumed active.
.LI Environment: 
unknown 
.LI Description:
.sp
``... [it] demonstrates that the Newcastle Connection technique
can be used to connect together operating systems with differing
structures and philosophies.''
.sp
``In the field of distributed computing, an interesting recent
development has been the Unix United system, implemented
using the Newcastle Connection.  This mechanism . . . connects
together a set of Unix systems to form a coherent distributed
system.''
.LI References:
partial: \fB[Snow86]\fR
.LE
.SK
.ce
.ps 14
.ft B
SIGMA
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The SIGMA Project
.LI Where:
Japan: The Japanese Information-Technology Promotion Agency
under the Ministry of International Trade and Industry.
.LI Contact:
unknown.
.LI Status: 
Active
.LI Environment: 
unknown \(em part of the SIGMA project is to specify a
future working environment for distributed workstations,
a copy of which can be found as appendix three.
.LI Description:
The SIGMA Project is tasked with the role of consolidating
Japan's software development resources.  The key points
noted are:
.BL
.LI
The development of a central database for the storage,
cataloging, advertising, and retrieval of software 
tools, and
.LI
The structuring of a network capable of providing
wide access to the database (including connections
by companies, universities, and research institutes).
.LE
.sp
SIGMA is, so far, based very heavily on existing
standards, with a fundamental basis of System V
from AT&T because the SIGMA team found ``System V
more reliable and safer'' than the Berkeley BSD
distributions.
.sp
The team seems to want to avoid choosing a 
technology until it is very clearly the accepted
standard for the industry.  For example, the specification
does not indicate which network file system they are
interested in supporting: either NFS from Sun or RFS from
AT&T V.3.
.sp
For further insights, consider the SIGMA workstation feature
list in the appendix: note especially the specification of
a number of windows to be supported, but no indication of
a specific window system having been chosen.
.LI References:
\fB[Schrie87]\fR
.LE
.SK
.ce
.ps 14
.ft B
Sprite
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The Sprite Project
.LI Where:
The University of California at Berkeley
.LI Contact:
Dr. John Ousterhout 
.br
Computer Science Division
.br
Department of Electrical Engineering and Computer Sciences
.br
University of California at Berkeley
.br
Berkeley, CA, 94720.
.br
email:
.ft CW
ouster@arpa.berkeley.edu
.ft R
.br
phone: (415) 642-0865
.sp
Alternatively, the group is accessible via ARPANET at the
electronic mail address:
.ft CW
spriters\s-1@\s+1arpa.berkeley.edu
.ft R
.LI Status: 
Active
.LI Environment: 
Sun 2 and Sun 3 series workstations
.LI Description:
Sprite is a distributed operating system that is
optimized for a small, fast local area network, and will
offer, via the file system (e.g. file-based Inter-Process
Communication (IPC)), the resources and transparent
peripheral access advantages of a mainframe while
retaining the performance advantages of an 
individual workstation.
.sp
There were three key issues for the designers: the network,
physical memory, and multiprocessors.
.sp
The main goal of the system is to support the 
Berkeley SPUR multiprocessor workstation in a
distributed environment, with the target software
environment being LISP.
.LI References:
\fB[Ouster88]\fR, [Ouster87]
.LE
.SK
.ce
.ps 14
.ft B
V
.ft R
.sp 2
.ps 10
.VL 13
.LI Name:
The V Distributed System
.LI Where:
Stanford University, Stanford California.
.LI Contact:
Dr. David Cheriton
.br
Computer Science, Building 460, Room 422
.br
Stanford, CA, 94305-6110
.br
email:
.ft CW
cheriton@cs.stanford.edu
.ft R
.br
phone: (415) 723-1054
.LI Status: 
Active
.LI Environment: 
DEC microVAX II workstations, Sun 3/75's, and access to 
the DEC Firefly multiprocessor workstation prototype.
.LI Description:
V is designed to be a testbed for distributed
systems research \(em built out of four logical
parts: the distributed Unix kernel; the service
modules; the runtime support libraries; and
the added user-level commands.  
Due to the modular design of the system, porting 
particular applications to work within V is
often as easy as simply relinking the binary
with the new runtime libraries.
.sp
The basis of the V design is the hypothesis that operating
systems could be developed to manage a \fIcluster\fR
of these workstations and server machines, providing
the resources and information sharing facilities
of a conventional single-machine system but running
on this new, more powerful and more
economical hardware base.  The tenets include:
.BL
.LI
High performance communication is the \fImost\fR critical
facility for distributed systems.
.LI
Protocols, not software, define the system.
.LI
Design distributed operating systems as \fIsoftware
backplanes\fR \(em small operating system kernel
implements just the basic protocols and services,
with the rest in process level/user space.
.LE
.sp
V uses high-speed Inter-Process Communication (IPC) as a base.
.sp
Tektronix is currently using V as a basis for their
distributed instrumentation.
.LI References:
\fB[Cheriton88]\fR, [Cheriton87], [Lantz85]
.LE
.SK
.ce 9999
.ps 14
.ft B
Appendix One : References
.ft R
.ce 0
.ps 10
.sp
.VL 13
.LI "\fB[Anders87]\fR"
Anderson, David, et. al., \fIThe DASH Project: Design Issues 
for Very Large Distributed Systems\fR,
login - The Newsletter of the Usenix Association, Vol 12, No 2, 
March/April 1987.  (\fBDASH\fR)
.LI "[Anders87/2]"
Anderson, David, et. al., \fIThe DASH Project: Issues
in the Design of Very Large Distributed Systems\fR,
UCB/Computer Science Report No. 87/338, January 1987.
(\fBDASH\fR)
.LI "\fB[Birman85]\fR"
Birman, Kenneth, \fIReplication and Fault Tolerance in the
ISIS System\fR, Proceedings of the Tenth ACM Symposium
on Operating Systems Principles, December, 1985, pp 79-86.
(\fBISIS\fR)
.LI "\fB[Birman88]\fR"
Birman, Kenneth, \fIAvailability of ISIS Distributed Programming
Environment\fR, netnews posting to `comp.os.research' dated
June 7th, 1988 from
.ft CW
ken\s-1@\s+1gvax.cs.cornell.edu
.ft P
(\fBISIS\fR)
.LI "[Black85]"
Black, Andrew, et. al., \fIAn Approach to
Accommodating Heterogeneity\fR, Technical Report
85-10-04, University of Washington, Seattle, WA.
(\fBHCS\fR)
.LI "\fB[Black85/2]\fR"
Black, Andrew, \fISupporting Distributed Applications:
Experience with Eden\fR, Proceedings of the Tenth
ACM Symposium on Operating Systems Principles, December
1985.  (\fBEden\fR)
.LI "[Cheriton87]" 
Cheriton, David, \fIUIO: A Uniform I/O Interface
for Distributed Systems\fR, ACM Transactions on
Computing Systems, Vol 5, No 1, February 1987.  (\fBV\fR)
.LI "\fB[Cheriton88]\fR"
Cheriton, David, \fIThe V Distributed System\fR, 
Communications of the ACM, Vol 31, No 3, March 1988,
pp 314-333.  (\fBV\fR)
.LI "[Lantz85]"
Lantz, Keith, et. al., \fIAn Empirical Study
of Distributed Application Performance\fR, IEEE
Transactions on Software Engineering, Vol 11, No 10, 
October 1985, pp 1162-1174.  (\fBV\fR)
.LI "\fB[Liskov87]\fR"
Liskov, Barbara, \fIDistributed Programming In Argus\fR,
Programming Methodology Group Memo 58 (to be published
in the Communications of the ACM), October 1987.
(\fBArgus\fR)
.LI "\fB[Miller87]\fR" 
Miller, Barton, et al., \fIDEMOS/MP: The Development
of a Distributed Operating System\fR, Software Practice
and Experience, Vol 17, No 4, April 1987, pp 277-290.
(\fBDEMOS/MP\fR)
.LI "\fB[Morris86]\fR"
Morris, James H., et al., \fIAndrew: A Distributed
Personal Computing Environment\fR, Communications of
the ACM, Vol 29, No 3, March 1986, pp 184-201.
(\fBAndrew\fR)
.LI "[Mullend82]"
Mullender, S. J., et al., \fIProtection and Resource
Control in Distributed Operating Systems\fR, (to appear
in \fIComputer Networks\fR), Vrije Universiteit,
Amsterdam, August 1982. (\fBAmoeba\fR)
.LI "\fB[Mullend86]\fR"
Mullender, S. J., et al., \fIThe Design of a 
Capability-Based Distributed Operating System\fR,
The Computer Journal, Vol 29, No 4, 1986, pp 289-299.
(\fBAmoeba\fR)
.LI "\fB[Needham82]\fR"
Needham, Roger, and Herbert, Andrew, \fIThe Cambridge Distributed 
Computing System\fR, Addison-Wesley, 1982.  (\fBCDS\fR)
.LI "\fB[Nichols87]\fR"
Nichols, David A., \fIUsing Idle Workstations in a
Shared Computing Environment\fR, Proceedings of the
Eleventh ACM Symposium on Operating Systems Principles,
November 8-11, 1987, pp 5-12.  (\fBAndrew\fR)
.LI "\fB[Notkin88]\fR"
Notkin, David, et al., \fIInterconnecting Heterogeneous
Computer Systems\fR, Communications of the ACM, Vol
31, No 3, March 1988, pp 258-273. (\fBHCS\fR)
.LI "[Ouster87]"
Ousterhout, John, et al., \fIAn Overview of 
the Sprite Project\fR, login - The Newsletter of the
Usenix Association, Vol 12, No 1, January/February
1987, pp 13-17.  (\fBSprite\fR)
.LI "\fB[Ouster88]\fR"
Ousterhout, John, et al., \fIThe Sprite
Network Operating System\fR, IEEE Computer,
February 1988, pp 23-36.  (\fBSprite\fR)
.LI "[Powell77]"
Powell, M. L., \fIThe DEMOS File System\fR, Proceedings
of the Sixth ACM Symposium on Operating Systems
Principles, November 1977, pp 33-42.  (\fBDEMOS/MP\fR)
.LI "\fB[Rashid87]\fR"
Rashid, Rick, \fIFrom RIG to Accent to MACH: The 
Evolution of a Network Operating System\fR,
CMU Computer Science Department Research Report,
August 1987.  (\fBMach\fR)
.LI "[Satyan85]"
Satyanarayanan, M., et al., \fIThe ITC Distributed
File System: Principles and Design\fR, Proceedings of
the Tenth ACM Symposium on Operating Systems Principles,
December 1985.  (\fBAndrew\fR)
.LI "\fB[Schrie87]\fR"
Schriebman, Jeff, \fIBlueprints for the Future\fR,
Unix Review, February 1987, pp 37-43.  (\fBSIGMA\fR)
.LI "\fB[Snow86]\fR"
Snow, C. R., et al., \fIAn Experiment with the Newcastle
Connection Protocol\fR, Software Practice and Experience,
Vol 16, No 11, November 1986.  (\fBThe Newcastle Connection\fR)
.LI "\fB[Walker83]\fR"
Walker, Bruce, et al., \fIThe LOCUS Distributed
Operating System\fR, Proceedings of the Ninth
ACM Symposium on Operating System Principles,
October 1983, pp 49-70.  (\fBLOCUS\fR)
.LE
.SK
.ce 999
.ps 14
.ft B
Appendix Two : Other Research
.ft R
.ce 0
.ps 10
.sp
There are a significant number of other research projects
going on in the area of distributed operating systems;
these, however, are based at research institutes or
corporations rather than universities.
.sp
Among the more interesting projects are:
.sp
.VL 15
.LI "DUNIX"
This is a multi-level distributed Unix kernel being
done at Bell Communications Research in New Jersey.
.br
See: Litman, Ami, \fIThe DUNIX Distributed Operating System\fR,
ACM Operating Systems Review, Vol 22, No 1, January 1988, pg 42.
.sp
.LI "Apollo Domains"
This distributed system is proprietary to Apollo 
Computer, and is the basis of their successful
distributed workstation package.
.sp
.LI "The R* System"
This research is being carried out at IBM's Thomas J.
Watson Research Center in New York.
.sp
.LI "Grapevine"
This is one of the many areas of distributed operating
systems research done at XEROX Palo Alto Research
Center (PARC), though most of the work seems to have
reached a state of stasis and is no longer being
pursued.
.sp
.LI "VAXClusters"
This distributed operating system is built within
Digital Equipment Corporation's VMS system, as a
proprietary protocol for clustering machines in
the VAX architecture family.
.sp
.LI "DUX"
This proprietary distributed operating system is
from Hewlett-Packard, Fort Collins, and is also
the basis for the successful diskless implementation
available on the 9000/300 series of machines.
.sp
.LI "Meglos"
This system from AT&T Bell Laboratories in Holmdel, New Jersey,
provides a user-level, message-based
programming environment for interconnected processors.
.br
See: Gaglianello, Robert, et al., \fICommunications In Meglos\fR,
Software Practice and Experience, Vol 16, No 10, October 1986.
.LE
.SK
.ce 999
.ps 14
.ft B
Appendix Three 

The SIGMA Workstation of the 1990's
.ft R
.ce 0
.ps 10
.sp
.TS
l l.
   Price:	    $18,980

   CPU: 	    32 bit + floating point processor

   Performance:	    1 MIPS or greater

   Memory:	    4 Megabytes of RAM or greater

   Disk:	    80 Megabytes or more

   Streamer MT:	    40 Megabytes or more

   Floppy:	    5" 1.6 Megabyte (\fIformat not specified\fR)

   Serial:	    RS-232C  (4 or more)

   Display:	    1024x768 color or black&white bitmapped
	    supporting 4 or more windows

   Pointing Device:	    2 or more button mouse
.TE
.FS " "
Source: Unix Review \(em see [Schrie87].
.FE
@EOF
if test "`wc -lwc <distributed.os`" != '   1359   5439  36433'
then
	echo ERROR: wc results of distributed.os are `wc -lwc <distributed.os` should be    1359   5439  36433
fi

chmod 666 distributed.os

exit 0

bobbyd@upvax.UUCP (Oswald Brews) (06/19/88)

In article <5066@sdcsvax.UCSD.EDU> taylor%hpdstma@hplabs.HP.COM (Dave Taylor)
provides:

[lots of great stuff about current distributed OS projects...]

Okay, so Andrew and Mach are there.  But won't ANYBODY acknowledge
Spice/Accent??!?!?  If it's completely dead and buried, then can I get a
copy for free?  Every note I've sent to CMU either never arrives, or the
lengthy reply doesn't get back to me ... 8-)  Somebody at CMU, please, I'm
begging now, send me SOMETHING about Accent.  Or box up those remaining
PERQs, dead or alive, and send them to me!  Even hate mail and flames would
let me know that this plea did not fall on deaf ears...

Holding my breath,

Christopher Lamb	bobbyd@upvax	Certified, Incorrigible PERQ Fanatic.
2719 NE 8th Ave.			Send your tax-deductible donations to
Portland, OR 97212	503/288-3800	this address. No live poultry please.