[comp.os.research] OS contact list

darrell@cs.ucsd.edu (Darrell Long) (01/25/88)

[ Once again, if you are an active OS researcher, I would like to hear from ]
[ you.  I am compiling a list of OS research projects and contacts.  --DL   ]

Kurt Zeilenga	(zeilenga@hc.dspo.gov)				Hyper UNIX

Parallel Processing Research Group
Department of Electrical and Computer Engineering
University of New Mexico
Albuquerque, NM 87131

Maurice Herlihy	(HERLIHY@C.CS.CMU.EDU)				Avalon

Carnegie-Mellon University
Computer Science Department
Schenley Park
Pittsburgh, PA 15213

darrell@cs.ucsd.edu (Darrell Long) (02/17/88)

[ This is the latest and greatest version of the OS contact list.  There are ]
[ some notable changes as well as additions. There are also some people who ]
[ are very obviously missing (you know who you are, and I won't add you till ]
[ you send your permission).  --DL                                           ]

Kurt Zeilenga		(zeilenga@hc.dspo.gov)			Hyper UNIX

Parallel Processing Research Group
Department of Electrical and Computer Engineering
University of New Mexico
Albuquerque, NM 87131

Availability: internal only.

Fred Douglis		(douglis@ginger.Berkeley.EDU)		Sprite
			(spriters@ginger.Berkeley.EDU)

Computer Science Division
571 Evans Hall
University of California
Berkeley, CA 94720

Availability: not ready to estimate.

Ken Birman		(ken@gvax.cs.cornell.edu)		ISIS

Department of Computer Science
4105 Upson Hall
Cornell University
Ithaca, New York 14853

Availability: summer 1988.

Tony Mason		(mason@pescadero.stanford.edu)		V

Distributed Systems Group
Stanford University

Availability:  A version is available now (version 6.0) and a new release is
               tentatively scheduled for this summer (version 7.0).

E. Douglas Jensen	(edj@cs.cmu.edu)			Alpha
			(ksr!edj@harvard.harvard.edu)

Kendall Square Research
Cambridge, MA
(617) 494-1146

Bob Bruce		(rab@mimsy.umd.edu)			Parallel OS

University of Maryland
Laboratory for Parallel Computation
Department of Computer Science
College Park, MD  20742

Calton Pu		(calton@cs.columbia.edu)		Synthesis
Department of Computer Science
Columbia University
New York, NY 10027
(212) 280-8110

Availability: internal only -- so far.

Andy Tanenbaum		(ast@cs.vu.nl)				Amoeba

Department of Mathematics and Computer Science
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam, Holland

Jan Edler		(edler@nyu.edu)				Ultra

New York University
251 Mercer Street
New York, NY 10012
(212) 998-3353

Richard D. Schlichting	(rick@arizona.edu)			Saguaro

Department of Computer Science
The University of Arizona
Tucson, AZ  85721

Michael L. Scott	(scott@cs.rochester.edu)		Psyche
(716) 275-7745
Thomas J. LeBlanc	(leblanc@cs.rochester.edu)
(716) 275-5426

Department of Computer Science
University of Rochester
Rochester, NY  14627

John Nicol		(cosmos@comp.lancs.ac.uk)		Cosmos

The COSMOS Research Group
Department of Computing
University of Lancaster
Bailrigg
Lancaster, LA1 4YR,
UNITED KINGDOM
+44 (0) 524 65201 Ext 4145, 4146


Ami Litman		(ami@bellcore.com)			DUNIX

Bell Communications Research
435 South Street
Morristown, N.J. 07960
(201) 829-4377

Availability: DUNIX on VAXes has been in use since September 1986.
	      Porting to Suns and MicroVAXes is currently under way.
	      Bellcore distributes DUNIX free to universities and 
	      research institutions.

Greg Burns		(gdburns@tcgould.tn.cornell.edu)	Trillium

Cornell Theory Center
265 Olin Hall
Ithaca, NY 14853-5201

Kevin Murray		(murray@minster.york.ac.uk)		Wisdom

Department of Computer Science
University of York
York, UK, YO1 5DD

Rick Rashid		(rashid@cs.cmu.edu)			MACH

Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-2617

Project:	Mach
Availability:	Since January 1987, current release number: 2
Machine types:	VAX, SUN 3, IBM RT
Cost:		None (no distribution fee, tape fee, license fee or royalty)
Licenses:	Berkeley 4.3bsd (VAX & RT), SunOS binary (SUN)
Contact:	Mach Project
		c/o Debbie Lynn
		Department of Computer Science
		Carnegie Mellon University
		Pittsburgh, PA 15213
		(412) 268-7665
		mach@cs.cmu.edu

Pamela Reyner Scott	(REYNER@CS.CMU.EDU)			Avalon

Carnegie Mellon University
Computer Science Department
Schenley Park
Pittsburgh, PA 15213

Availability: about one year from now.

darrell@cs.ucsd.edu (Darrell Long) (03/09/88)

[ This is the last version of the OS contact list.   I haven't gotten ]
[ any submissions for a couple of weeks so I think that it is stable. ]
[ My thanks to everyone who contributed!  --DL                        ]

Kurt Zeilenga		(zeilenga@hc.dspo.gov)			Hyper UNIX

Parallel Processing Research Group
Department of Electrical and Computer Engineering
University of New Mexico
Albuquerque, NM 87131

Availability: internal only.

Fred Douglis		(douglis@ginger.Berkeley.EDU)		Sprite
			(spriters@ginger.Berkeley.EDU)

Computer Science Division
571 Evans Hall
University of California
Berkeley, CA 94720

Availability: not ready to estimate.

Ken Birman		(ken@gvax.cs.cornell.edu)		ISIS

Department of Computer Science
4105 Upson Hall
Cornell University
Ithaca, New York 14853

Availability: summer 1988.

Tony Mason		(mason@pescadero.stanford.edu)		V

Distributed Systems Group
Stanford University

Availability:  A version is available now (version 6.0) and a new release is
               tentatively scheduled for this summer (version 7.0).

E. Douglas Jensen	(edj@cs.cmu.edu)			Alpha
			(ksr!edj@harvard.harvard.edu)

Kendall Square Research
Cambridge, MA
(617) 494-1146

Bob Bruce		(rab@mimsy.umd.edu)			Parallel OS

University of Maryland
Laboratory for Parallel Computation
Department of Computer Science
College Park, MD  20742

Calton Pu		(calton@cs.columbia.edu)		Synthesis
Department of Computer Science
Columbia University
New York, NY 10027
(212) 280-8110

Availability: internal only -- so far.

Andy Tanenbaum		(ast@cs.vu.nl)				Amoeba

Department of Mathematics and Computer Science
Vrije Universiteit
Postbus 7161
1007 MC Amsterdam, Holland

Jan Edler		(edler@nyu.edu)				Ultra

New York University
251 Mercer Street
New York, NY 10012
(212) 998-3353

Richard D. Schlichting	(rick@arizona.edu)			Saguaro

Department of Computer Science
The University of Arizona
Tucson, AZ  85721

Michael L. Scott	(scott@cs.rochester.edu)		Psyche
(716) 275-7745
Thomas J. LeBlanc	(leblanc@cs.rochester.edu)
(716) 275-5426

Department of Computer Science
University of Rochester
Rochester, NY  14627

John Nicol		(cosmos@comp.lancs.ac.uk)		Cosmos

The COSMOS Research Group
Department of Computing
University of Lancaster
Bailrigg
Lancaster, LA1 4YR,
UNITED KINGDOM
+44 (0) 524 65201 Ext 4145, 4146


Ami Litman		(ami@bellcore.com)			DUNIX

Bell Communications Research
435 South Street
Morristown, N.J. 07960
(201) 829-4377

Availability: DUNIX on VAXes has been in use since September 1986.
	      Porting to Suns and MicroVAXes is currently under way.
	      Bellcore distributes DUNIX free to universities and 
	      research institutions.

Greg Burns		(gdburns@tcgould.tn.cornell.edu)	Trillium

Cornell Theory Center
265 Olin Hall
Ithaca, NY 14853-5201

Kevin Murray		(murray@minster.york.ac.uk)		Wisdom

Department of Computer Science
University of York
York, UK, YO1 5DD

Rick Rashid		(rashid@cs.cmu.edu)			MACH

Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
(412) 268-2617

Project:	Mach
Availability:	Since January 1987, current release number: 2
Machine types:	VAX, SUN 3, IBM RT
Cost:		None (no distribution fee, tape fee, license fee or royalty)
Licenses:	Berkeley 4.3bsd (VAX & RT), SunOS binary (SUN)
Contact:	Mach Project
		c/o Debbie Lynn
		Department of Computer Science
		Carnegie Mellon University
		Pittsburgh, PA 15213
		(412) 268-7665
		mach@cs.cmu.edu

Pamela Reyner Scott	(REYNER@CS.CMU.EDU)			Avalon

Carnegie Mellon University
Computer Science Department
Schenley Park
Pittsburgh, PA 15213

Availability: about one year from now.

Partha Dasgupta		(partha@gatech.edu)			Clouds

School of Information and Computer Science
Georgia Institute of Technology
Atlanta GA 30332
(404) 894-2572

Availability: end of 1988, internally.

Dr. M. Sloman		(mss@doc.ic.ac.uk)			Conic
Dr. J. Kramer
Dr. J. Magee

Department of Computing
Imperial College
180 Queensgate
London, UK
SW7 2BZ     

(+44-1) 589 5111 extension (5041, 5058, 5040, 5043)

Raphael A. Finkel	(raphael@ms.uky.edu)			Yackos, DIG

POT 959
Computer Science Department
University of Kentucky
Lexington, KY  40506-0027

James C. Berets		(jberets@bbn.com)			Cronus

BBN Laboratories
10 Moulton Street
Cambridge, MA 02238
(617) 873-2593

Graham Parrington 	(graham@cheviot.newcastle.ac.uk)	Arjuna
Computing Laboratory
University of Newcastle upon Tyne
Claremont Tower
Claremont Road
Newcastle upon Tyne
NE1 7RU
ENGLAND


Darrell Long
Department of Computer Science & Engineering, UC San Diego, La Jolla CA 92093
ARPA: Darrell@Beowulf.UCSD.EDU  UUCP: darrell@sdcsvax.uucp
Operating Systems submissions to: comp-os-research@sdcsvax.uucp

guy@cs.ucla.edu (Richard Guy) (09/21/89)

| Date: Sat, 19 Aug 89 10:40:07 EDT
| From: siegel@cs.cornell.edu (Alexander Siegel)
| Subject: Deceit File System project


Title:    Deceit Distributed File System

Contacts: Alex Siegel    (siegel@cs.cornell.edu)
          Ken Birman     (ken@cs.cornell.edu)
          Keith Marzullo (marzullo@cs.cornell.edu)


Deceit is a distributed file system being developed at Cornell; it
focuses on flexible file semantics in relation to efficiency,
scalability, and reliability.  Deceit servers are functionally
interchangeable and collectively provide the illusion of a single,
large server machine to any client that mounts Deceit.  Stable copies
of each file are stored on a subset of the file servers.  The user is
able to set parameters on a file to achieve different levels of
availability, performance, and one-copy behavior.  Deceit behaves as a
plain NFS server and can be mounted by any NFS client without
modifying any client software.  The current Deceit prototype uses the
ISIS Distributed Programming Environment for all communication and
processor group management.
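
A rough illustration of the per-file parameters described above; the
struct and function names here are invented, since the posting does not
give Deceit's actual interface:

    // Hypothetical sketch: per-file parameters trade availability,
    // performance, and one-copy behavior against each other.
    #include <iostream>
    #include <string>
    #include <vector>

    struct FileParams {
        int  stableReplicas;   // how many servers hold stable copies
        bool oneCopy;          // strict one-copy behavior vs. looser semantics
    };

    // Choose the subset of servers that will hold stable copies of a file.
    std::vector<std::string> placeReplicas(const std::vector<std::string>& servers,
                                           const FileParams& p) {
        std::vector<std::string> chosen;
        for (int i = 0; i < p.stableReplicas && i < (int)servers.size(); ++i)
            chosen.push_back(servers[i]);   // a real system would balance load
        return chosen;
    }

    int main() {
        std::vector<std::string> servers = {"fs1", "fs2", "fs3", "fs4"};
        FileParams p{3, true};              // high availability, one-copy semantics
        for (const auto& s : placeReplicas(servers, p))
            std::cout << "stable copy on " << s << "\n";
    }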

Availability: early prototypes available
================================================================================
| Date: Mon, 21 Aug 89 15:47:09 EDT
| From: heddaya@CS.BU.EDU (Abdelsalam Heddaya)
| Subject: BURDS: BU Replicated Data System

Here's an entry that refers to my current project to experiment with
replication methods for typed data.  This project is related to
operating systems the same way file systems are.  If this passes your
test for relevance, I'd appreciate it if you would include it in the list
you're compiling.  Thanks.

Abdelsalam A. Heddaya (heddaya@cs.bu.edu)      BURDS: BU Replicated Data System

Abdelsalam A. Heddaya
Computer Science Department
Boston University
111 Cummington St.
Boston, MA 02215

(617) 353-8922

Availability: in development.
================================================================================
| Date: Tue, 22 Aug 89 14:49:37 CDT
| From: roy@roy.cs.uiuc.edu (Roy Campbell)
| Subject: Choices

Roy Campbell		(roy@cs.uiuc.edu)			Choices
(217) 333-0215
Vince Russo		(russo@cs.uiuc.edu)
(217) 333-7937

Department of Computer Science
University of Illinois
1304 W. Springfield Av.
Urbana, IL 61801


Availability:	Stable as of 7/25/1989; release number: 0.0.2.
Machine types:	Encore Multimax NS 32332. (Future releases: MC68030, Intel 386).
Cost:		None (no distribution fee, license fee or royalty).
License:	From University of Illinois: Choices Software Distribution.
Contact:	Tapestry Project
		c/o Anda Harney
		Department of Computer Science
		1304 W. Springfield Av.
		University of Illinois
		Urbana, IL 61801
		(217) 333-3328
		harney@cs.uiuc.edu
Comments:	Work in progress. The system, as distributed, is incomplete.
================================================================================
| Date: Wed, 23 Aug 89 16:11:35 -0400
| From: ogud@cs.UMD.EDU (Olafur Gudmundsson)
| Subject: Maruti Operating System project

MARUTI, A Distributed, Fault-Tolerant, Hard Real-Time Operating System

Contacts:
Ashok K. Agrawala			Olafur Gudmundsson
Department of Computer Science		Department of Computer Science
University of Maryland			University of Maryland
College Park, MD 20742			College Park, MD 20742
agrawala@brillig.umd.edu		ogud@mimsy.umd.edu
(301) 454 4968				(301)-454-6497

Keywords: Hard Real-Time, Distributed, Object-Oriented, Fault-Tolerant,
	Operating System.


To address the computational needs of tomorrow's real-time applications,
it is essential that the operating system support fault-tolerant,
distributed operation while assuring that the hard real-time
requirements of processing are met.  MARUTI is an operating system
environment which includes the kernel and a set of support tools aimed
at addressing the issues of design, implementation, and maintenance of
real-time applications.

The MARUTI operating system has been designed as an object-oriented
system, with suitable extensions to the standard concepts of objects to
provide efficient real-time implementations.  The scheduling approach
includes a verification step which assures that the deadline will be met
once a processing request has been accepted by the system.  Resource
allocation addresses application-specific fault-tolerance needs and
guarantees hard real-time performance within the constraints of the
requested fault-tolerance levels.  The system uses a uniform mechanism
for fault monitoring, reporting, and recovery, and permits the use of a
variety of techniques for handling faults.  It also provides a security
model for applications.

The approach taken in this effort has been to implement the complete system
and use it for developing applications as well as studying the design
and implementation issues for hard real-time systems.  

Availability: A prototype will be distributable next year.
================================================================================
| Date: Thu, 24 Aug 89 09:46:08 -0200
| From: deswarte@laas.laas.fr (Yves Deswarte)
| Subject: Saturne


SATURNE is a research project that aims to exploit the distribution of
workstations on LANs as much as possible in order to increase fault
tolerance, for accidental faults (e.g., hardware faults) as well as for
deliberate faults (e.g., intrusions).
Two main techniques are currently being developed within the scope of
the Saturne project (a sketch of the second follows this list):
- To tolerate accidental faults, the SATURATION technique consists of
using idle resources to increase the redundancy of active tasks rather
than to decrease their response time.
- To tolerate deliberate faults, the Fragmentation-and-Scattering
technique consists of cutting information into fragments small enough
that isolated fragments cannot deliver significant information, and of
disseminating those fragments throughout the distributed system.  This
technique was first applied to the design of a secure file archiving
service, and is currently being developed for the implementation of an
"intrusion-tolerant" network security service.

Contact:  Yves Deswarte
          LAAS-CNRS and INRIA
          7, avenue du Colonel Roche
          31077 TOULOUSE
          FRANCE
     tel.: +33/ 61 33 62 88
   E-mail: deswarte@laas.laas.fr
       or: y_deswarte@eurokom.ie
================================================================================
| From: Mark Little <M.C.Little%newcastle.ac.uk@NSFNET-RELAY.AC.UK>
| Date: Thu, 24 Aug 89 14:00:48 +0100
| Subject: Arjuna

Project Name: Arjuna

Project Members (August 1989):
	Santosh K. Shrivastava (principal investigator and coordinator)
	Graham D. Parrington
	Stuart M. Wheater
	Fariba Hedayati
	Mark C. Little
	Shangjie Jin

Contacts:
    Stuart M. Wheater	      Mark C. Little
    Stuart.Wheater@uk.ac.newcastle    M.C.Little@uk.ac.newcastle
    
    Computing Laboratory,
    University of Newcastle upon Tyne,
    Newcastle upon Tyne,
    NE1 7RU,
    England.
    

Environment: Sun-3 workstations

Description:

  Arjuna is an object-oriented programming system that provides a set of tools
for the construction of fault-tolerant distributed applications. A prototype
version written in C++ has been designed and implemented to run on a collection
of Unix workstations connected by a local area network.  Arjuna provides nested
atomic actions (nested atomic transactions) for structuring application
programs. Atomic actions operate on objects, which are instances of abstract
data types (C++ classes), by making use of remote procedure calls (RPCs).

  The design and implementation goal of Arjuna was to provide a
state-of-the-art programming system for constructing fault-tolerant
distributed applications.
In meeting this goal, three system properties were considered highly important:

(i)	Integration of mechanisms: a fault-tolerant distributed system requires
	a variety of system functions for naming, locating and invoking
	operations upon local and remote objects and also for concurrency
	control, error detection and recovery from failures, etc.  These mechanisms
	must be provided in an integrated manner such that their use is easy and
	natural.
	
(ii)	Flexibility: these mechanisms should also be flexible, permitting
	application specific enhancements, such as type-specific concurrency and
	recovery control, to be easily produced from existing default ones.
	
(iii)	Portability: the system should be easy to install and run on a variety
	of hardware configurations.
	
  The computational model of atomic actions controlling operations upon objects
provides a natural framework for incorporating integrated mechanisms for fault-
tolerance and distribution. In Arjuna, these mechanisms have been provided
through a number of C++ classes; these classes have been organised into a class/
type hierarchy in a manner which will be familiar to the developers of
'traditional' (single node) centralised object-oriented systems. Arjuna is novel
with respect to other fault-tolerant distributed systems in taking the approach
that every major entity in the system is an object. Thus, Arjuna not only
supports an object-oriented model of computation, but its internal structure
is also object-oriented. This approach permits the use of the type inheritance
mechanism of object-oriented systems for incorporating the properties of fault-
tolerance and distribution in a very flexible way, permitting the implementation
of concurrency control and recovery for objects in a type specific manner. In
this aspect, Arjuna bears some resemblance to the Avalon/C++ system.  Also,
Arjuna has been implemented without any changes to the underlying operating
system (Sun Unix), making it quite portable.
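
To make the programming model concrete, here is a much-simplified C++
sketch of an atomic action bracketing operations on a recoverable
object.  It is not Arjuna's actual class library (all names are
assumed), and it elides nesting, locking, persistence, and
distribution:

    #include <iostream>

    // Base class providing (trivial, single-object) recoverability:
    // state is saved when an action starts and restored on abort.
    class Recoverable {
    public:
        virtual ~Recoverable() = default;
        virtual void saveState() = 0;
        virtual void restoreState() = 0;
    };

    class Account : public Recoverable {
        int balance = 100, saved = 0;
    public:
        void saveState() override { saved = balance; }
        void restoreState() override { balance = saved; }
        void withdraw(int n) { balance -= n; }
        int  value() const { return balance; }
    };

    // Stand-in for an atomic action: commit() ends it normally;
    // leaving scope without committing aborts it.
    class AtomicAction {
        Recoverable& obj;
        bool committed = false;
    public:
        explicit AtomicAction(Recoverable& o) : obj(o) { obj.saveState(); }
        void commit() { committed = true; }
        ~AtomicAction() { if (!committed) obj.restoreState(); }
    };

    int main() {
        Account a;
        {
            AtomicAction act(a);
            a.withdraw(30);
            // no commit(): the action aborts, undoing the withdrawal
        }
        std::cout << "balance after aborted action: " << a.value() << "\n"; // 100
    }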

Publications:
	S. K. Shrivastava, G. N. Dixon, G. D. Parrington.
	Objects and Actions in reliable distributed systems.
	IEE Software Eng. Journal, September 1987.

	F. Panzieri, S. K. Shrivastava.
	Rajdoot: a remote procedure call mechanism supporting orphan detection
		 and killing.
	IEEE Trans. on Software Eng., January 1988.
	
	G. N. Dixon.
	Object management for persistence and recoverability.
	PhD Thesis, Technical Report 276, December 1988.
	
	G. D. Parrington.
	Management of concurrency in a reliable object-oriented system.
	PhD Thesis, Technical Report 277, December 1988.
	
	G. N. Dixon, S. K. Shrivastava.
	Exploiting type inheritance facilities to implement recoverability
	in object based systems.
	Proc. of 6th Symp. on Reliability in Distributed Software and Database
	Systems, Williamsburg, March 1987.
	
	G. N. Dixon, S. K. Shrivastava, G. D. Parrington.
	Managing persistent objects in Arjuna: a system for reliable
	distributed computing.
	Proc. of Workshop on persistent object systems, St. Andrews, Aug 1987.
	
	S. K. Shrivastava, L. Mancini, B. Randell.
	On the duality of fault-tolerant system structures.
	Workshop on experiences with dist. systems, Kaiserslautern, Lecture
        Notes in Computer Science, Vol 309, Sept. 1987
	 
	
	G. D. Parrington, S. K. Shrivastava.
	Implementing concurrency control in reliable distributed object-oriented
	systems.
	ECOOP88, Lecture Notes in Computer Science, Vol. 322.
	
        S.K. Shrivastava, G.N. Dixon, G.D. Parrington, F. Hedayati,
        S.M. Wheater and M. Little,
        The Design and Implementation of Arjuna.
        Technical Report, 1989.

	G. N. Dixon, G. D. Parrington, S. K. Shrivastava, S. M. Wheater.
	The treatment of persistent objects in Arjuna.
	ECOOP89, July 1989 (also, The Computer Journal, Vol. 32, Aug 1989).
================================================================================
| Date: Mon, 28 Aug 89 08:26:57 EDT
| From: king@grasp.cis.upenn.edu
| Subject: Timix

Name: Timix

Where: University of Pennsylvania

Contact:
	Mr. Robert King
	Department of Computer and Information Science
	University of Pennsylvania
	Philadelphia, PA  19104-6389
	king@grasp.cis.upenn.edu

Principal Investigator:
	Dr. Insup Lee
	Department of Computer and Information Science
	University of Pennsylvania
	Philadelphia, PA  19104-6389
	lee@central.cis.upenn.edu
	(215) 898-3532

Environment: MicroVAX processors connected via Ethernet and/or ProNET-10

Description:
	Timix is a real-time kernel being developed to support
	distributed applications, such as those found in the robotics
	domain.  It supports processes with independent address spaces
	that execute, communicate and handle devices within timing
	constraints.  The two basic communication paradigms supported
	are signals and asynchronous port-based message passing.
	New devices, which are directly controlled by application
	processes, can be integrated into the system without changing
	the kernel.  Dynamic timing constraints are used for scheduling
	processes and interprocess communications.
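
As an illustration of the second paradigm, asynchronous port-based
message passing, here is a single-address-space toy; it is not the
Timix kernel interface, and all names are assumptions:

    #include <deque>
    #include <iostream>
    #include <optional>
    #include <string>

    // A port is a named message queue: send never blocks, receive polls.
    class Port {
        std::deque<std::string> queue;
    public:
        void send(const std::string& msg) { queue.push_back(msg); }  // asynchronous
        std::optional<std::string> receive() {                       // non-blocking
            if (queue.empty()) return std::nullopt;
            std::string m = queue.front();
            queue.pop_front();
            return m;
        }
    };

    int main() {
        Port sensor;
        sensor.send("range=42cm");        // a device process posts a reading
        if (auto m = sensor.receive())    // a control process polls the port
            std::cout << "got: " << *m << "\n";
    }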

References:
	I. Lee, R. King, and R. Paul.  A Predictable Real-Time Kernel
	for Distributed Multi-Sensor Systems.  IEEE Computer (June 1989),
	22(6):78 - 83.

	I. Lee, R. King, and R. Paul.  RK: A Real-Time Kernel for Distributed
	System with Predictable Response.  Technical Report MS-CIS-88-78,
	Department of Computer and Information Science, University of
	Pennsylvania (October 1988).

	I. Lee, R. King, and X. Yun.  A Real-Time Kernel for Distributed
	Multi-Robot Systems.  Proceedings of the American Control
	Conference (June 1988), 1083-1088.

	I. Lee and R. King.  Timix: A Distributed Real-Time Kernel
	for Multi-Sensor Robots.  Proceedings of the IEEE International
	Conference on Robotics and Automation (April 1988), 1587-1589.

	I. Lee, R. King, and G. Holder.  Timix: A Distributed Kernel
	for Real-Time Applications.  Proceedings of the IEEE Workshop
	on Design Principles for Experimental Distributed Systems
	(October 1986).
================================================================================
| Date: Tue, 29 Aug 89 18:34:05 +0200
| From: mcvax!imag.fr!krakowia@uunet.UU.NET (Sacha Krakowiak)
| Subject: Guide


Sacha Krakowiak (krakowiak@imag.imag.fr) 

Guide: Object-Oriented Distributed System

Sacha Krakowiak

Bull-IMAG
2 rue de Vignate
38610 GIERES
FRANCE

[This is a joint research unit involving Bull and University of Grenoble]

+33 76 51 78 79

Availability: in development (a prototype version should be available mid-1990)
================================================================================
| From: banatre@irisa.fr
| Date: 5 Sep 89 13:47:26 GMT
| Subject: GOTHIC contact

	GOTHIC is a research project aiming at providing an advanced
programming system for developing fault-tolerant distributed applications.
	Two main issues are addressed in this project:
	- The design and realization of fault-tolerant multiprocessor
workstations based on the active stable storage mechanism, which incorporates
built-in mechanisms for the implementation of atomicity.
	- The design and implementation of language features which
reconcile object-oriented programming and parallelism.  This is achieved
with the introduction of the concept of the multiprocedure, which allows
the expression of fine-grain parallelism and its control within a
generalized procedural framework.  Multiprocedures operate on fragmented
objects, which are instances of abstract data types (classes); the concrete
representation of such objects can be located on a set of virtual nodes.
These language features have been added to Modula-2 (see the sketch below).
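
The language features themselves are Modula-2 extensions; this C++ toy
merely illustrates the idea of a fragmented object whose operations
apply to every fragment (sequentially here, in parallel under GOTHIC),
with all names invented:

    #include <iostream>
    #include <numeric>
    #include <vector>

    // A fragmented object: its concrete representation is split across
    // virtual nodes; each element of 'fragments' stands for one node's part.
    struct FragmentedCounter {
        std::vector<int> fragments;

        // "Multiprocedure": conceptually runs on all fragments at once.
        void addEverywhere(int n) {
            for (int& f : fragments) f += n;
        }

        int total() const {
            return std::accumulate(fragments.begin(), fragments.end(), 0);
        }
    };

    int main() {
        FragmentedCounter c{{1, 2, 3}};   // representation spread over 3 nodes
        c.addEverywhere(10);
        std::cout << "total = " << c.total() << "\n";   // 36
    }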

	A first prototype version of the GOTHIC system has been designed and
implemented to run on a collection of fault-tolerant multiprocessor
workstations connected by a local area network.

Contact:	Michel BANATRE
		INRIA
		IRISA, Campus de Beaulieu,
		35042- RENNES cedex, FRANCE.
	
		tel:+33/ 99 36 20 00
		E-mail: banatre@irisa.fr

References.

BANATRE J.P., BANATRE M., PLOYETTE F.
The Concept of Multi-function: a General Structuring Tool for Distributing
Operating Systems.
Proc. of 6th DCS, Cambridge, Mass., May 1986, pp. 478-485.

BANATRE M., MULLER G., BANATRE J.P.
Ensuring Data Security with a Fast Stable Storage.
Proc. of 4th Int. Conf. on Data Eng., L.A., Feb. 88.

BANATRE J.P., BANATRE M., MULLER G.
Main Aspects of the GOTHIC Distributed System.
in R. Speth (ed), Research into Networks and Distributed Applications 
-EUTECO'88- Vienna, Austria, April 88 (North-Holland).

BANATRE J.P., BANATRE M., MORIN Ch.
Implementing Atomic Rendezvous within a Transactional Framework.
Proc. of 8th Symp. on Reliable Distributed Systems, Seattle, Oct. 10-12, 1989
(to appear).
================================================================================
| From: hxt@cs.cmu.edu
| Date: 8 Sep 89 03:34:38 GMT
| Subject: RE: Real-Time Mach

Our group is working on a real-time version of Mach
here at CMU.
We are currently implementing a real-time thread model and
a better scheduler on Mach.
If you need further information, please contact

	Hide Tokuda
	School of Computer Science
	Carnegie Mellon Univ.
	Pittsburgh, PA 15213

Internet: hxt@cs.cmu.edu
Phone: (412)268-7672
FAX:   (412)268-5016
================================================================================
| Date: Wed, 13 Sep 89 11:43:35 K
| From: Christopher J S Vance <munnari!cs.adfa.oz.au!cjsv@uunet.UU.NET>
| Subject: RODOS - A Research Oriented Distributed Operating System

-----
Project Name: RODOS - A Research Oriented Distributed Operating System

Personnel:
	Dr Andrzej Goscinski		ang@csadfa.cs.adfa.oz.au
	Dr George Gerrity		gwg@csadfa.cs.adfa.oz.au
	Mr Christopher Vance		cjsv@csadfa.cs.adfa.oz.au
	Dr Chris Lokan			cjl@csadfa.cs.adfa.oz.au
	+ a few research students

Postal Address:
	Department of Computer Science
	University College
	University of New South Wales
	Australian Defence Force Academy
	Canberra ACT 2601
	AUSTRALIA

Please direct enquiries (including employment and enrolment
applications) to Dr Goscinski. 


The abstract of one of our Technical Reports should give some idea of
the flavour of our work:

The Design of RODOS: A Research Oriented Distributed Operating System
G. Gerrity, A. Goscinski, C. Vance, and B. Williams
Technical Report CS 88/17, September 1988, 13 pages

Abstract:

We know how to design an operating system for a centralized computer
system.  On the other hand, the study of distributed operating systems
is still in its infancy; their design and construction is still an open
problem.  Indeed, only the critical problem areas have been identified,
and there is little agreement among researchers about appropriate
solutions.  The project we are carrying out is an attempt at a uniform
attack on this problem.  The main goal of this report is to present the
design process for a Research Oriented Distributed Operating System (a
test bed) called RODOS, intended for investigating and comparing
alternative structures and methodologies for implementing the
components of a distributed operating system.


Other RODOS Technical Reports:

The Design of the Kernel, Processes, and Communications for a
Research Oriented Distributed Operating System
C.J.S. Vance, CS 88/13, August 1988, 26 pages

Interprocess Communication Primitives in RODOS
C.J.S. Vance and A. Goscinski, CS 89/3, February 1989, 12 pages

The Logical Design of a Naming Facility for RODOS
C.J.S. Vance and A. Goscinski, CS 89/15, August 1989, 25 pages
================================================================================
| Date: Mon, 14 Aug 89 10:20:16 EDT
| From: scott@cs.rochester.edu
| Subject: Psyche

Contacts:
    Michael L. Scott             Thomas J. LeBlanc
    scott@cs.rochester.edu       leblanc@cs.rochester.edu
    (716) 275-7745               (716) 275-5426

    Department of Computer Science
    University of Rochester
    Rochester, NY  14627

The Psyche project at the University of Rochester is an attempt to
support truly general-purpose parallel computing on large shared-memory
multiprocessors.  We define "general-purpose" to mean that the
applications programmer must be able to do anything for which the
underlying hardware is physically well-suited.  Our current work
focuses on the development of an operating system that will support
the full range of parallel programming models in a single programming
environment.

Through five years of hands-on experience with a 128-node
multiprocessor, we have become increasingly convinced that no single
model of process state or style of communication can be appropriate for
all applications.  Traditional approaches ranging all the way from the
most tightly-coupled shared-memory model to the most loosely-coupled
form of message passing have applications for which they are
conceptually attractive.  Our goal is to allow each application, and in
fact each *part* of an application, to be written under the model most
appropriate for its own particular needs.

The Psyche user interface is based on passive data abstractions called
"realms."  Each realm includes data and code.  The code constitutes a
protocol for manipulating the data and for scheduling processes running
in the realm.  The intent is that the data should not be accessed
except by obeying the protocol.  To facilitate data sharing, Psyche
uses uniform addressing for realms -- each realm has the same virtual
address from the point of view of every process that uses it.  Which
realms are actually accessible to a process depends on the protection
domain in which that process is executing.  Each protection domain is
associated with a particular "root" realm, and includes any other
realms for which access rights have been demonstrated to the kernel.

Depending on the degree of protection desired, an invocation of a realm
operation can be as fast as an ordinary procedure call or as safe as a
remote procedure call between protection domains.  Unless the *caller*
insists on protection, the protected and optimized varieties of
invocation both look exactly like local subroutine calls.  The kernel
implements protected invocations by catching and interpreting page
faults.  Protected invocations cause the calling process to move
temporarily to the protection domain of the target realm.

Multiple models of process management are supported by moving most of
the responsibility for scheduling out of the kernel and into user
code.  The user can ask that a certain number of virtual processors be
assigned to a protection domain.  The kernel implements these virtual
processors (which we call "activations") via multiprogramming, but
otherwise stays out of process management.  It provides each activation
with signals (software interrupts) when scheduling decisions might be
needed, including when
(1) a new process has moved into the activation's protection domain
by performing a protected invocation,
(2) the activation's current process has left the protection domain
by performing a protected invocation,
(3) an invocation has completed and the calling process has returned,
(4) a user-specified timeout has expired, or
(5) a program fault has occurred (arithmetic, protection, etc.).
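
A minimal sketch of a user-level scheduler reacting to these signals;
the event names and handler below are invented for illustration and are
not taken from Psyche:

    #include <iostream>

    // The five occasions, listed above, on which the kernel signals an
    // activation (virtual processor) that a scheduling decision may be needed.
    enum class SchedEvent {
        ProcessArrived,      // (1) a process invoked into this protection domain
        ProcessDeparted,     // (2) the current process invoked out of the domain
        InvocationReturned,  // (3) a protected invocation completed
        TimeoutExpired,      // (4) a user-specified timeout fired
        ProgramFault         // (5) arithmetic, protection, etc.
    };

    // User-level scheduling policy for one activation.
    void onSchedSignal(SchedEvent e) {
        switch (e) {
        case SchedEvent::ProcessArrived:     std::cout << "enqueue new process\n";    break;
        case SchedEvent::ProcessDeparted:    std::cout << "run another process\n";    break;
        case SchedEvent::InvocationReturned: std::cout << "resume the caller\n";      break;
        case SchedEvent::TimeoutExpired:     std::cout << "preempt and reschedule\n"; break;
        case SchedEvent::ProgramFault:       std::cout << "invoke fault handler\n";   break;
        }
    }

    int main() {
        onSchedSignal(SchedEvent::TimeoutExpired);  // as if delivered by the kernel
    }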

We are interested in very large-scale parallelism, which implies that
memory access costs will be non-uniform.  Much of our work to date has
focused on the design of a virtual memory system that can balance the
needs of demand paging, automatic management of locality (the so-called
"NUMA problem"), and Psyche-style realms and protection domains.  As of
summer 1989 we have been writing code for about a year.  Our prototype
implementation runs on a BBN Butterfly Plus multiprocessor (the
hardware base for the GP1000 product line).  Personnel include two
faculty members, two professional staff members, and eight students.

We are collaborating with members of the department's computer vision
and planning groups on a major project in real-time active vision and
robotics, sponsored in part by a recently-announced NSF institutional
infrastructure grant.  Psyche forms the systems foundation for
high-level vision and planning functions in the robot lab.  We expect
the resulting applications experience to provide valuable feedback on
our design.  Our first robotics application (a balloon-bouncing
program) is now in the final stages of implementation.

References:
    %A T. J. LeBlanc
    %A M. L. Scott
    %A C. M. Brown
    %T Large-Scale Parallel Programming: Experience with the BBN Butterfly
    Parallel Processor
    %J Proceedings of the ACM SIGPLAN PPEALS 1988 \(em Parallel Programming:
    Experience with Applications, Languages, and Systems
    %C New Haven, CT
    %D 19-21 July 1988
    %P 161-172
    %X Retrospective on early work with the Butterfly;
    good background for understanding the motivation for Psyche

    %A M. L. Scott
    %A T. J. LeBlanc
    %A B. D. Marsh
    %T Design Rationale for Psyche, a General-Purpose Multiprocessor Operating
    System
    %J Proceedings of the 1988 International Conference on Parallel Processing
    %C St. Charles, IL
    %D 15-19 August 1988
    %P 255-262, vol. II \(mi Software
    %X Why we need a new kind of operating system to use shared-memory
    multiprocessors well

    %A M. L. Scott
    %A T. J. LeBlanc
    %A B. D. Marsh
    %T A Multi-User, Multi-Language Open Operating System
    %J Second Workshop on Workstation Operating Systems
    %C Pacific Grove, CA
    %D to appear, 27-29 September 1989
    %X How Psyche combines the flexibility and efficiency of an open
    operating system with the protection of a traditional O.S.

    %A M. L. Scott 
    %A T. J. LeBlanc
    %A B. D. Marsh
    %T Implementation Issues for the Psyche Multiprocessor Operating System
    %J Workshop on Experiences with Building Distributed and
    Multiprocessor Systems
    %C Ft. Lauderdale, FL
    %D to appear, 5-6 October 1989
    %X Issues that arise in structuring a large shared-memory kernel

    %A T. J. LeBlanc
    %A B. D. Marsh
    %A M. L. Scott
    %T Memory Management for Large-Scale NUMA Multiprocessors
    %R Technical Report
    %I Computer Science Department, University of Rochester
    %D 1989
    %X The design of a VM system to support demand paging, locality management,
    and Psyche abstractions

    %A M. L. Scott
    %A T. J. LeBlanc
    %A B. D. Marsh
    %T Evolution of an Operating System for Large-Scale Shared-Memory
    Multiprocessors
    %R Technical Report
    %I Computer Science Department, University of Rochester
    %D 1989
    %X The development of Psyche from first principles to concrete
    mechanisms
================================================================================
| Date: Mon, 14 Aug 89 07:55:34 PDT
| From: Peter Reiher <reiher@amethyst.Jpl.Nasa.Gov>

Parallel Discrete Event Simulation

Peter Reiher
Jet Propulsion Laboratory
Mail Stop 510-211
4800 Oak Grove Drive
Pasadena, CA 91109

(818) 397-9213

Availability: Soon to be released to COSMIC, NASA's software distribution system.
================================================================================
| Date: Mon, 14 Aug 89 11:17:29 -0400
| From: edler@jan.ultra.nyu.edu (Jan Edler)
| Subject: NYU Ultracomputer

Jan Edler
NYU Ultracomputer Research Lab
715 Broadway, 10th floor
New York, NY 10003
(212) 998-3353
edler@nyu.edu
================================================================================
| Date: Mon, 14 Aug 89 14:04:10 +0200
| From: Sape Mullender <sape@cwi.nl>
| Subject: Amoeba

Project: Amoeba

Sape J. Mullender
Centre for Mathematics and Computer Science
Kruislaan 413
1098 SJ Amsterdam
Netherlands

sape@cwi.nl,  office phone: +31 20 592 4139,  fax: +31 20 592 4199

Andrew S. Tanenbaum
Faculty of Mathematics and Computer Science
Vrije Universiteit
De Boelelaan 1081
1081 HV Amsterdam
Netherlands

ast@cs.vu.nl
================================================================================
| Date: Mon, 14 Aug 89 13:06:13 PDT
| From: brent%sprite.Berkeley.EDU@ginger.Berkeley.EDU (Brent Welch)
| Subject: Sprite

Sprite is a network operating system being developed and used at UC Berkeley.
It currently runs on Sun3, Sun4, and DS3100 architectures, as well as the SPUR
multiprocessor.  The interesting features of Sprite include its shared network
file system, multiprocessor support, and a network process migration facility.
Sprite is basically 4.3BSD UNIX compatible, with various extensions.

Network distribution is handled entirely within the kernel.  A kernel-to-kernel
RPC protocol is used for high-performance network communication.  The file
system provides a uniformly shared, location-transparent name space.  High
performance file access (30% faster than NFS in long-running benchmarks) is
achieved by using large main-memory caches on both client and server machines.
Diskless workstations are first-class Sprite citizens.  Files, devices, and
pseudo-devices (user-level service applications) have names in the file system.
Devices and pseudo-devices can be located anywhere in the network; they do not
have to live on the file server that implements their name.  Pseudo-devices are
used to implement various system services at user level, including a TCP/UDP
protocol server and an X11 window system server.  Pseudo-file-systems are used
to implement NFS access via a user-level service application.

Process migration lets running processes move between hosts of identical CPU
architecture.  A parallel make facility is used to exploit this capability.
Multiprocessor support means that the kernel itself is multi-threaded, and
more than one user process can execute concurrently in the kernel.  Sprite
also allows multiple execution threads in a user process, and it provides
monitor-style synchronization primitives.

These features are all in day-to-day use by a small network of 3 servers and
about 25 clients.  All Sprite development is done on Sprite, and the Sprite
system sources have been on Sprite disks for over two years now.  Sprite
is/will be in the public domain, although we haven't released it yet.  Sprite
is also being used as the OS platform for the RAID disk array and XPRS
database projects.  These projects focus on high-performance I/O subsystems
and their use by a database system.  Currently there is a prototype disk
array server (RAID == Redundant Array of Inexpensive Disks) that runs Sprite,
and the Postgres database server will be ported to Sprite this fall.  Members
of the Sprite project can be reached via email at:
	spriters@ginger.Berkeley.EDU
================================================================================
| Date: Mon, 14 Aug 89 20:17:46 EDT
| From: beers@tcgould.TN.CORNELL.EDU (Jim Beers)
| Subject: Trollius

James R. Beers, Jr.		(beers@tcgould.tn.cornell.edu)

Trollius -- a MIMD OS, especially for Unix-hosted transputers.

James R. Beers, Jr.
Advanced Computing Facility
Cornell Theory Center
265 Olin Hall
Ithaca, NY  14851

(607) 255-9393

Availability: 2.0 available soon.
================================================================================
| Date: Tue, 15 Aug 89 13:46:08 +0200
| From: Peter Schlenk <ucbvax!decwrl!fauern!immd4.informatik.uni-erlangen.de!schlenk@ucscc.UCSC.EDU>
| Subject: Object-Oriented Distributed Operating System

Peter Schlenk	(schlenk@immd4.informatik.uni-erlangen.de)
Object-Oriented Distributed Operating System

Peter Schlenk
Universitaet Erlangen, IMMD IV
Martensstr. 1
8520 Erlangen
West Germany

(0049) 9131/857269

Availability: in development.
================================================================================
| Date: Tue, 15 Aug 89 11:49:01 EDT
| From: snm%horus@gatech.edu (Sathis Menon)
| Subject: Clouds

The Clouds Project at Georgia Institute of Technology
(Distributed, object based Operating System project)

internet  snm@gatech.edu                       
uucp      {ihnp4,decvax,ulysses,akgua}!gatech!boa!snm  

Contacts:

	Sathis Menon 
	Research Scientist
	Distributed Systems Group
	School of ICS
	Georgia Institute of Technology
	Atlanta, GA 30332

	Partha Dasgupta 
	Assistant Professor
	School of ICS
	Georgia Institute of Technology
	Atlanta, GA 30332
================================================================================
| Date: Tue, 15 Aug 89 16:16:48 MST
| From: "Larry Peterson" <llp@arizona.edu>
| Subject: x-Kernel

Name: The x-Kernel Project

Where: University of Arizona

Contact:
	Dr. Larry Peterson
	Department of Computer Science
	University of Arizona
	Tucson, AZ 85721
	email: llp@arizona.edu
	phone: (602) 621-4231


Environment: Sun-3 workstations

Description:

	The x-kernel is a configurable operating system kernel in which
	communication protocols are the fundamental building blocks.
	The x-kernel supports multiple address spaces, light-weight 
	processes, and an architecture for implementing and composing 
	network protocols. The primary objective of the x-kernel is to 
	facilitate the implementation of efficient protocols. In particular,
	the x-kernel supports the construction of new protocols from
	existing protocol pieces, it serves as a workbench for designing
	and evaluating new protocols, and it provides a platform for
	accessing large, heterogeneous collections of network services.

	The basic research problem addressed by the x-kernel is the degree
	to which kernel abstractions can facilitate the implementation of
	protocols. The key is that such abstractions must be rich enough
	to accommodate a wide variety of protocols, yet implementable in
	a way that does not impose a significant performance penalty on
	any of the protocols. Our ultimate goal in this effort is to develop
	operating system techniques that make protocol construction an
	everyday part of distributed applications programming.
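
As a rough illustration of protocol composition, here is a toy in which
every protocol exports the same interface and can therefore be stacked
freely.  The real x-kernel is written in C with its own uniform
protocol interface; the classes and names below are invented:

    #include <iostream>
    #include <string>

    // A uniform protocol interface: because every protocol looks alike,
    // protocols compose into arbitrary stacks.
    struct Protocol {
        Protocol* below = nullptr;                    // next protocol down
        virtual ~Protocol() = default;
        virtual void push(const std::string& msg) {   // send: pass downward
            if (below) below->push(msg);
            else std::cout << "on the wire: " << msg << "\n";
        }
    };

    struct Checksum : Protocol {
        void push(const std::string& m) override { Protocol::push("CKSUM|" + m); }
    };

    struct Transport : Protocol {
        void push(const std::string& m) override { Protocol::push("PORT=7|" + m); }
    };

    int main() {
        Transport t;
        Checksum c;
        t.below = &c;        // compose: transport over checksum
        t.push("hello");     // prints: on the wire: CKSUM|PORT=7|hello
    }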

References:

	N. Hutchinson, and L. Peterson. Design of the x-Kernel.
	Proceedings of ACM SIGCOMM `88 (Aug. 1988), 65-75.

	Hutchinson, N., Mishra, S., Peterson, L., and Thomas, V.
	Tools for Implementing Network Protocols. Software---Practice
	& Experience, to appear.

	N. Hutchinson, L. Peterson, M. Abbott, and S. O'Malley.
	RPC in the x-Kernel: Evaluating New Design Techniques. 
	Proceedings of the Twelfth Symposium on Operating System
	Principles, (December 1989), to appear.

	N. Hutchinson, L. Peterson, H. Rao. The x-Kernel: An Open
	Operating System Design. Proceedings of the Second Workshop
	on Workstation Operating Systems (September 1989), to appear.
================================================================================
| Date: Wed, 16 Aug 89 11:12:50 PDT
| From: bcn@june.cs.washington.edu (Clifford Neuman)
| Subject: Prospero

B. Clifford Neuman      (bcn@cs.washington.edu)         Prospero

B. Clifford Neuman
Department of Computer Science, FR-35
University of Washington
Seattle, Washington 98195

(206) 543-7798

Prospero is a distributed operating system based on the virtual system
model: a new approach to organizing large distributed systems.  A
user-centered view of the system is supported.  Users build their own
virtual systems from the resources available over the network.  Tools
are provided to help the user organize and customize their virtual
system.  Among the tools are the filter and the union link.  To make
it clear which namespace is to be used when resolving names, closure
is supported.  A namespace is associated with each object, and that
namespace is used to resolve names specified by that object.
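
A minimal sketch of closure, under the simplifying assumption that a
namespace is just a table of bindings; the types and names are invented
for illustration:

    #include <iostream>
    #include <map>
    #include <string>

    using Namespace = std::map<std::string, std::string>;

    // Each object carries the namespace used to resolve the names it mentions.
    struct Object {
        std::string text;    // may mention names such as "papers"
        Namespace   names;   // the namespace "closed over" this object
    };

    std::string resolve(const Object& o, const std::string& name) {
        auto it = o.names.find(name);
        return it != o.names.end() ? it->second : "<unbound>";
    }

    int main() {
        Object a{"see papers", {{"papers", "/grad/alice/papers"}}};
        Object b{"see papers", {{"papers", "/staff/bob/pubs"}}};
        // The same name resolves differently in each object's namespace.
        std::cout << resolve(a, "papers") << "\n"
                  << resolve(b, "papers") << "\n";
    }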

        A prototype is running.
================================================================================
| From: anderson%charming.Berkeley.EDU@berkeley.edu (David Anderson)
| Date: Wed, 16 Aug 89 13:30:37 PDT
| Subject: DASH 

David P. Anderson
541 Evans Hall
UC Berkeley
Berkeley, CA  94720
(415) 642-4979
anderson@snow.Berkeley.EDU


                 THE DASH OPERATING SYSTEM


                     David P. Anderson

         Computer Science Division, EECS Department
             University of California, Berkeley
                    Berkeley, CA  94720

                      August 16, 1989


     The DASH project is doing research in system support for
applications that 1) use ``multimedia'' (digital audio and video);
2) are distributed; and 3) are interactive.  As a research testbed, we
have developed a distributed operating system kernel.  The DASH kernel
supports the storage, communication, and processing of data by
processes in protected user-level address spaces.  It provides the
abstraction of data streams with guaranteed real-time performance
(throughput and delay).  As an example, consider an application that
reads compressed full-motion video from a disk, transmits the data
across a network (and perhaps through gateways), then decompresses and
displays it in a window.  Under DASH, if sufficient system resources
are available at the time when the application is started, then it
will perform correctly regardless of any subsequent concurrent
activities.

     To meet the performance requirements of multimedia I/O, the DASH
system uses an abstraction of ``resources'' (CPU, network access,
video processors, etc.) that are used in processing data streams.
Resources can be accessed in ``sessions'' having parameters for the
throughput, delay, burstiness, and reliability of the stream.  A
session is, in effect, a reservation of part of the resource.  These
sessions then can be combined to form ``end-to-end'' sessions.  This
architecture allows the real-time capabilities of networks such as
FDDI and BISDN to be exploited by user-level processes.  The DASH
network architecture is backwards-compatible with TCP/IP, allowing
interoperation with existing systems.

     The DASH kernel is designed for high-throughput real-time
communication.  The kernel uses preemptive deadline-based process
scheduling, and is written using object-oriented structuring
principles that make it easily modifiable and extensible.  It has a
novel virtual memory design that allows data to be securely passed
between virtual address spaces faster than in existing systems.
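
A minimal sketch of combining per-resource sessions into an end-to-end
session, assuming (purely for illustration) that end-to-end throughput
is the bottleneck of the hops and that worst-case delays are additive;
the names are not DASH's:

    #include <iostream>
    #include <vector>

    struct Session {
        double throughput;   // guaranteed rate through this resource (Mbit/s)
        double delay;        // worst-case delay this resource adds (ms)
    };

    // End-to-end session: throughput is the minimum, delays accumulate.
    Session endToEnd(const std::vector<Session>& hops) {
        Session e{1e12, 0.0};
        for (const Session& s : hops) {
            if (s.throughput < e.throughput) e.throughput = s.throughput;
            e.delay += s.delay;
        }
        return e;
    }

    int main() {
        // disk -> network -> decompressor, as in the video example above
        std::vector<Session> path = {{12.0, 20.0}, {100.0, 5.0}, {15.0, 10.0}};
        Session e = endToEnd(path);
        std::cout << e.throughput << " Mbit/s, " << e.delay << " ms worst case\n";
    }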
================================================================================
| Date: Thu, 17 Aug 89 13:18:40 EST
| From: munnari!bruce.cs.monash.OZ.AU!rdp@uunet.UU.NET (Ronald Pose)
| Subject: Monash capability kernel

	In the Department of Computer Science at Monash University we have
developed a shared memory multiprocessor with a rather unusual address
translation mechanism in which all processes share a single global virtual
address space. A capability-based operating system kernel controls access to
the global virtual memory, which is designed to extend world-wide.
	Particular research interests are:
		Multiprocessor architectures
		Capability-based Operating System
		Unusual high-speed (40 MHz) backplane bus
		User interfaces to the system
		Languages which can exploit the properties of the
		capability-based virtual memory and the multiprocessor
		architecture.
		Coupling the multiprocessor systems together across
		local and wide-area networks.

Contact Person:

Ronald Pose		ACSnet:	rdp@bruce.cs.monash.oz
Dept. Computer Science	UUCP:	..uunet!munnari!bruce.cs.monash.oz!rdp
Monash University 	ARPA:	rdp%bruce.cs.monash.oz.au@uunet.uu.net
AUSTRALIA 3168.		CSNET:	rdp@bruce.cs.monash.oz.au

Telephone:	+61 3 565 3903
Fax:		+61 3 565 4746

================================================================================
| Date: Thu, 17 Aug 1989 09:24-EDT 
| From: Doug.Jensen@K.GP.CS.CMU.EDU
| Subject: Alpha

E. Douglas Jensen
Concurrent Computer Corp.
One Technology Way
Westford, MA 01886
508-392-2999
edj@cs.cmu.edu, edj@westford.ccur.com

             The Alpha Real-Time Decentralized Operating System

Alpha is an operating system for the mission-critical integration and
operation of large, complex, distributed, real-time systems.  Only recently
have such systems become more common in industrial factory and plant
automation (e.g., automobile manufacturing), aerospace (e.g.,space
stations), and military (e.g., C3I) contexts.  They differ substantially
from the more widely known timesharing systems, numerically oriented
supercomputers, and networks of personal workstations.  More surprisingly,
they also depart significantly from traditional real-time systems, which are
for low-level sampled data monitoring and control.  The most challenging
technical requirements dictated by this application domain are in the areas
of:  satisfying real-time constraints despite the system's inherently
stochastic and nondeterministic nature;  distributed programming and
system-wide (inter-node) resource management;  robustness in the face of
failures and even attacks;  and adaptability to a wide range of
ever-changing requirements over decades of use.  Satisfying these entails
unconventional design and implementation tradeoffs.

In Alpha's distributed programming model, activities correspond to threads,
which execute concurrently in otherwise passive objects, and cross object
(and, transparently and reliably, node) boundaries by means of operation
invocation;  they carry with them attributes such as urgency, importance,
and reliability specified by the application.  Alpha instances cooperate to
manage the global resources of the entire system based on these attributes,
using best-effort resource management algorithms to ensure that as many as
possible of the most important aperiodic as well as periodic time
constraints are met, permitting graceful degradation in response to the
inevitable overloads.

To facilitate maintaining integrity of system and application distributed
data and programs despite physical dispersal, asynchronous concurrency of
execution, and hardware failures, Alpha includes exception handling
facilities, thread repair, and kernel-level mechanisms for real-time atomic
transactions and object replication.  Alpha uses policy/mechanism separation
to exploit application specificity in support of adaptability.  Departing
from common practice, Alpha's performance is optimized for the important
high-stress exception cases, such as failure or attack, rather than for the
normal, most frequent cases.

Alpha embodies results from nine years of research performed by the Archons
Project at Carnegie Mellon University, where a prototype was built from 1984
to 1987;  another copy has been successfully demonstrated with application
software written at General Dynamics Corp.  Alpha research is ongoing at CMU
and other academic and industrial institutions, but is now led by Concurrent
Computer Corp., where it continues to be sponsored in part by DoD.  A series
of next-generation designs and implementations will be delivered to various
Government and industry labs for experimental applications beginning in
early 1990.
================================================================================
| Date: Fri, 18 Aug 89 16:05:28 +0200
| From: shapiro@corto.inria.fr
| Subject: SOS

SOS is an experimental distributed object-oriented operating system.

SOS is based on a concept of "distributed object", implemented as a
"group" of elementary objects distributed among different address
spaces; members of a group have mutual communication privileges, which
are denied to non-members.  Access to a service may occur only via a
local "proxy", which is a member of the group implementing the
service.  Typically, a client gains access to some new service by
importing (migrating) a proxy for that service.
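
A much-simplified sketch of the proxy principle; the classes below are
invented, and where SOS would migrate the proxy object into the
client's address space, this toy merely constructs one:

    #include <iostream>
    #include <string>

    struct PrinterService {                  // the remote group, much simplified
        void print(const std::string& s) { std::cout << "printing: " << s << "\n"; }
    };

    // The proxy is the group's sole member living in the client's space;
    // clients access the service only through it.
    class PrinterProxy {
        PrinterService& svc;                 // stands in for group membership
    public:
        explicit PrinterProxy(PrinterService& s) : svc(s) {}
        void print(const std::string& s) { svc.print(s); }   // forward to group
    };

    // "Importing" a proxy; in SOS this is an object migration.
    PrinterProxy importProxy(PrinterService& s) { return PrinterProxy(s); }

    int main() {
        PrinterService service;
        PrinterProxy p = importProxy(service);
        p.print("report.txt");               // access only via the proxy
    }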

A prototype of SOS has been implemented in C++, on top of Unix.  It
supports object migration, persistent objects, dynamic linking, and
arbitrarily complex user-defined objects (written in C++).  The system
services are accessed via a small set of pre-defined proxies.
Existing system services are: a distributed object manager, a name
service, a storage service, a communication service (allowing groups
to choose from a library of protocol types: datagrams, RPC, multicast,
atomic multicast, etc.).  Applications built using SOS include a
multimedia document manager and a UIMS.

SOS is thoroughly documented with a reference manual (similar to Unix
man) and an introductory programmer's manual.  Most of the code is in
the public domain, except for a few components derived from AT&T code,
for which an AT&T licence (Unix and C++) is necessary.

Contact: Marc Shapiro
INRIA, B.P. 105, 78153 Le Chesnay Cedex, France.  Tel.: +33 (1) 39-63-53-25
e-mail: shapiro@sor.inria.fr			(internet)
        ...!inria!shapiro			(uucp)
        inria!shapiro@uunet.uu.net 		(non-standard)


Here is a bibliography of some recent papers:

@InProceedings (ProxyPrinciple,
  Author = "Marc Shapiro",
  Title = "Structure and Encapsulation in Distributed Systems:
	   the {P}roxy {P}rinciple",
  Booktitle = "Proc.\ 6th Intl.\ Conf.\ on Distributed Computing Systems",
  organization = "{IEEE}", pages = "198--204",
  Address = "Cambridge, Mass. ({USA})",
  Year = 1986, Month = May)           

@InProceedings (sos:sigops86,
  author = "Shapiro, Marc",
  title = "{SOS}: a distributed Object-Oriented Operating System",
  booktitle = "2nd {ACM SIGOPS} European Workshop, on ``Making
	       Distributed Systems Work{''}",
  address = "Amsterdam (the Netherlands)",
  year = 1986, month = sep,
  note = "(Position paper)"
  )

@techreport(sos:v1-recueil,
  author = "Shapiro, Marc and Abrossimov, Vadim and Gautron, Philippe
	    and Habert, Sabine and Makpangou, Mesaac Mounchili",
  title = "Un recueil de papiers sur le syst\`{e}me d'exploitation
	   r\'{e}parti \`{a} objets {SOS}",
  institution = {Institut National de la Recherche en Informatique et
		 Automatique},
  address = {Rocquencourt (France)},
  year = 1987, month = may, number = 84,
  type = "Rapport Technique"
  )

@InProceedings{loo:C++:286,
  author =      "Philippe Gautron and Marc Shapiro",
  title = 	"Two extensions to {C++}: A Dynamic Link Editor and
		 Inner data",
  booktitle =   "Proceedings and additional papers, {C++} Workshop",
  year = 	1987,
  pages =       "23--34", 
  organization = 	"USENIX",
  address = 	"Berkeley, CA ({USA})",
  month = 	nov
}

@InProceedings{pro:sos:314,
  author = 	 "Mesaac Makpangou and Marc Shapiro",
  title = 	 "The {SOS} Object-Oriented Communication Service",
  booktitle = 	 "Proc.\ 9th Int.\ Conf.\ on Computer Communication",
  year = 	 1988,
  address = 	 "Tel Aviv (Israel)",
  month = 	 "October--November"
}

@InProceedings{sos:315,
  author = 	"Marc Shapiro",
  title = 	"The Design of a Distributed Object-Oriented Operating
		 System for Office Applications",
  booktitle = 	"Proc.\ Esprit Technical Week 1988",
  year = 	1988,
  address = 	"Brussels (Belgium)",
  month = 	nov
}

@InProceedings{pro:sos:321,
  author = 	 "Makpangou, Mesaac Mounchili",
  title = 	 "Invocations d'objets distants dans {SOS}",
  booktitle = 	 "De Nouvelles Architectures pour les Communications",
  year = 	 1988,
  editor = 	 "Guy Pujolle",
  pages = 	 "195--201",
  publisher = "Eyrolles",
  address = 	 "Paris (France)",
  month = 	 oct
}

@InProceedings{sos:prs:371,
  author = 	 "Marc Shapiro and Laurence Mosseri",
  title = 	 "A simple object storage system",
  booktitle = 	 "Proc.\  Workshop on persistent object systems",
  year = 	 1988,
  pages = 	 "320--327",
  editor =       "J. Rosenberg",
  address = 	 "Newcastle NSW (Australia)",
  month = 	 jan
}

@TechReport{sos:388,
  author = 	 "The {SOR} group",
  title = 	 "{SOS} Reference Manual for Prototype {V4}",
  institution =  {Institut National de la Recherche en Informatique et
		 Automatique},
  year = 	 1989,
  type =         "Rapport Technique",
  number = 	 103,
  address = 	 {Rocquencourt (France)},
  month = 	 feb
 }


@PhdThesis{makThesis,
  author = 	 "Makpangou, Mesaac Mounchili",
  title = 	 "Protocoles de communication et programmation par
		  objets~: l'exemple de {SOS}",
  school = 	 "Universit\'{e} Paris {VI}",
  year = 	 1989,
  address = 	 "Paris (France)",
  month = 	 feb
}

@InProceedings{nom:sos:391,
  author = 	 "J.P. Le Narzul and M. Shapiro",
  title = 	 "Un Service de Nommage pour un Syst\`{e}me \`{a}
		  Objets R\'{e}partis",
  booktitle = 	 "Actes Convention Unix 89",
  year = 	 1989,
  pages = 	 "73--82",
  organization = "{AFUU}",
  address = 	 "Paris",
  month = 	 mar
}

@InProceedings{sos:prs:c++:397,
  author = 	 "Marc Shapiro and Philippe Gautron and Laurence Mosseri",
  title = 	 "Persistence and Migration for {C}++ Objects",
  booktitle = 	 "ECOOP'89",
  year = 	 1989,
  address = 	 "Nottingham ({GB})",
  month = 	 jul
}

@TechReport{shapiro:experiences89:sor60,
  author = 	 "Marc Shapiro",
  title = 	 "{P}rototyping a distributed object-oriented {OS} on {U}nix",
  institution =  sor,
  year = 	 1989,
  type = 	 "Note technique",
  number = 	 "SOR--60",
  address = 	 rocquencourt,
  month = 	 may,
  note =         "To appear, Workshop on Experiences
		  with Building Distributed (and Multiprocessor)
		  Systems, Ft.\ Lauderdale FL (USA), Oct. 1989."
}

@InProceedings{chorus:mv:411,
  author = 	 "V. Abrossimov and M. Rozier and M. Shapiro",
  title = 	 "Generic Virtual Memory Management for Operating
		  System Kernels",
  booktitle = 	 "Proc.\ 12th ACM Symp.\ on Operating Systems Principles",
  year = 	 1989,
  organization = "ACM SIGOPS",
  address = 	 "Litchfield Park AZ (USA)",
  month = 	 dec
}
================================================================================
| Date: Fri, 18 Aug 89 14:09:55 EDT
| From: dibble@cs.rochester.edu
| Subject: Bridge

Contacts: Michael L. Scott scott@cs.rochester.edu
          Peter C. Dibble  dibble@cs.rochester.edu


Parallel computers with non-parallel file systems find themselves
limited by the performance of the processor running the file
system.  We have designed and implemented a parallel file system
called Bridge that eliminates this problem by spreading both data
and file system computation over a large number of processors and
disks.  Our design will adapt easily to a wide variety of
multiprocessors and multicomputers; our current implementation runs
on the BBN Butterfly.  To assess the effectiveness of Bridge we
have used it as the basis for several standard file-handling
applications, including copying, sorting, searching, and image
transposition.  Analysis and empirical measurements suggest that
for applications such as these Bridge can provide nearly linear
speedup on over 100 nodes.
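
A minimal sketch of the declustering idea behind such a file system,
assuming the obvious round-robin block mapping; Bridge's actual layout
and tools are described in the project's papers:

    #include <cstddef>
    #include <iostream>

    struct BlockLocation {
        std::size_t server;       // which server/disk pair holds the block
        std::size_t localBlock;   // block offset within that server's storage
    };

    // Block b of a file lives on server (b mod N), at local offset (b div N),
    // so reads of consecutive blocks proceed on different disks in parallel.
    BlockLocation locate(std::size_t block, std::size_t servers) {
        return {block % servers, block / servers};
    }

    int main() {
        const std::size_t N = 4;              // four I/O processors
        for (std::size_t b = 0; b < 8; ++b) {
            BlockLocation loc = locate(b, N);
            std::cout << "block " << b << " -> server " << loc.server
                      << ", local block " << loc.localBlock << "\n";
        }
    }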
================================================================================
| Date: Mon, 18 Sep 89 18:29:46 PDT
| From: Richard Guy <guy@cs.ucla.edu>
| Subject: FICUS project

Project: FICUS distributed file system

Abstract:
FICUS is a research effort at UCLA to demonstrate the feasibility and
desirability of using a single, large, transparent file system structure
for the entire DARPA Internet community.  The sheer scale involved
(60,000+ nodes in 1988) challenges many of the assumptions underlying
existing transparent file systems (e.g., LOCUS, NFS, Andrew).  A closely
related goal is to use general-purpose data replication techniques to
provide high availability and reliability for file system contents.

Contact: Richard Guy
	 3804-F Boelter Hall
	 UCLA Computer Science
	 Westwood, CA 90024
	 213/825-2756
	 guy@cs.ucla.edu -or- ficus@cs.ucla.edu

	 Tom Page
	 3804-D Boelter Hall
	 213/206-8696
	 page@cs.ucla.edu

	 Gerald J. Popek
	 3731H Boelter Hall
	 213/825-7879
	 popek@cs.ucla.edu

Availability: in design/prototype stage.