[net.works] Apollo

Mishkin@YALE.ARPA (04/30/83)

From:     Nathaniel Mishkin <Mishkin@YALE.ARPA>

I'd like to bring the Apollo back to the attention of the people who seem to
be heading towards workstations running Unix (e.g. SUNs).  There are a couple
of reasons I feel this way:

    (1) The Apollo is an excellent system today and is likely to get even
        better.

    (2) Many people formed their opinions about the Apollo when the "released"
        system (i.e. the part they tell you about in the manuals) was small;
        more of it is released now, so those opinions are out of date.

For those of you not familiar with the Apollo, it is a 680xx-based personal
workstation supporting virtual memory, multiple processes, and windows on a
high-resolution bitmap display.  Apollo workstations are connected by a
10-Mbit ring network that supports a distributed file system.  Apollos have
been available for 2 years.  Apollo recently announced the "Apollo 2", which
is based on the 68010 and has at least the power of the Apollo 1 while being
smaller and much cheaper; the Apollo 2 runs the same software as the Apollo 1.

One of Apollo's problems (the company's, not the machine's) is "public
relations":  they have been reluctant to "release" some of the layers of
their system.  I think I understand the motivation for this -- they don't
want to be tied down to a particular interface too soon.  However, an
unfortunate side effect of this policy is that most of the outside world
doesn't know that the system has a very nice, clean, flexible, and elegant
internal structure.

By way of example, I offer a brief description of the internals of the
Apollo's inter-process communication scheme (IPC) and its distributed file
system.  Both of these aspects of the system ARE released, have proven
themselves, and are used heavily at Yale.  Both are to some extent based on
the good ideas of other (mostly "experimental") systems.

The Apollo-supplied IPC library (called "mailboxes") is built entirely in user
state.  That they were able to do this speaks well of the structure of the
system (it does NOT mean it is inefficient).  Mailboxes use the following
underlying kernel primitives:  the ability for multiple processes on the same
node to map the same file; synchronization between processes on the same node
via "event counters"; and low-level node-to-node communication.  Intra-node
IPC is simple:
communicating processes map a common file onto which they superimpose
(EQUIVALENCE for you Fortran fans) a queue data structure.  Event counters
synchronize access to the queue and control the "blocking" of processes
accessing the mailbox.  When a process "writes" to the mailbox, it advances an
event counter that is being waited on by a process "reading" from the MBX.
The reader unblocks and pulls the data out of the queue.
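
To make this concrete, here is a rough sketch in C of what such a queue might
look like.  The names and layout are my own invention, not Apollo's; a
Unix-style mmap() stands in for the map primitive, and where this sketch
polls, the real event counters let the kernel block the waiter instead:

    /* Intra-node mailbox sketch: two processes map the same file and
       superimpose a ring-buffer queue on it.  The "written" and "read"
       words play the role of event counters.  All names are invented. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SLOTS     64
    #define MSG_BYTES 128

    struct mailbox {                /* superimposed on the mapped file  */
        volatile uint32_t written;  /* "event counter": messages sent   */
        volatile uint32_t read;     /* "event counter": messages taken  */
        char slot[SLOTS][MSG_BYTES];
    };

    struct mailbox *mbx_map(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0666);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, sizeof(struct mailbox)) < 0) {
            close(fd);
            return NULL;
        }
        void *p = mmap(NULL, sizeof(struct mailbox),
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? NULL : (struct mailbox *)p;
    }

    /* Writer: fill the next slot, then advance the "written" counter. */
    void mbx_send(struct mailbox *m, const char *msg)
    {
        while (m->written - m->read >= SLOTS)   /* queue full: wait      */
            usleep(1000);
        strncpy(m->slot[m->written % SLOTS], msg, MSG_BYTES - 1);
        m->written++;                           /* would be ec_advance() */
    }

    /* Reader: wait for "written" to pass our count, then pull the data. */
    void mbx_recv(struct mailbox *m, char *buf)
    {
        while (m->read == m->written)           /* would be ec_wait()    */
            usleep(1000);
        memcpy(buf, m->slot[m->read % SLOTS], MSG_BYTES);
        m->read++;
    }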

Inter-node mailboxing is handled by a "mailbox helper" process that is the
proxy for the remote process; helpers talk to each other using the network
primitives and they talk to the local application processes via mailboxes.
The helper processes are invisible to the application processes.
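
A sketch of one direction of such a helper, with invented names again -- a
stream socket stands in for the ring primitives, and mbx_recv is the routine
from the sketch above:

    /* Half of a mailbox helper: pull messages out of the local mailbox
       and forward them to the peer helper on the remote node.  A
       mirror-image loop carries traffic the other way. */
    #include <sys/socket.h>

    struct mailbox;                               /* from the sketch above */
    void mbx_recv(struct mailbox *m, char *buf);
    #define MSG_BYTES 128

    void helper_outbound(struct mailbox *m, int net_fd)
    {
        char buf[MSG_BYTES];
        for (;;) {
            mbx_recv(m, buf);                    /* block on local mailbox */
            send(net_fd, buf, MSG_BYTES, 0);     /* forward to remote peer */
        }
    }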

Event counters deserve special mention.  The primitive the system actually
supports is this:  2 processes map the same file and agree by some convention
that a particular offset into the file is an event counter.  Then, a process
can "wait" for the counter to reach a particular value, or can "advance" the
counter.  A process blocks on a "wait" until another process "advances" the
counter.  The "wait" primitive takes a list of event counters so you can wait
on multiple events.
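
Rendered in C, the shape of the interface might be something like this (my
rendering, not Apollo's actual declarations):

    /* An event counter is just an agreed-upon word in a mapped file;
       the kernel supplies "wait" and "advance" on it. */
    typedef volatile unsigned long eventcount;

    /* Block until ecs[i] reaches values[i] for some i; takes a list so
       a process can wait on several events at once.  Returns that i. */
    int ec_wait(eventcount *ecs[], unsigned long values[], int n);

    /* Increment the counter and wake any processes waiting on it. */
    void ec_advance(eventcount *ec);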

While you can (and do) write programs that create and manipulate "private"
event counters, a lot of programs use the event counters exported by various
libraries.  E.g. I can ask the stream I/O library for the event counter for a
particular "stream" (i.e. "file descriptor").  Of course, not all streams
(e.g. those on disk files) do anything interesting with the event counter,
but some do:  if I have a stream to a process's standard input and that stream
happens to be pointing to an input window, the counter gets advanced whenever
input is available.
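
For example, a program watching two input sources at once might look like the
sketch below.  The stream_t type and the stream_get_ec routine are names I
have invented for the query the stream library answers; eventcount and
ec_wait are from the sketch above:

    /* Multiplex on the event counters exported by two streams. */
    typedef volatile unsigned long eventcount;     /* as above           */
    int ec_wait(eventcount *ecs[], unsigned long values[], int n);

    typedef struct stream stream_t;                /* opaque handle      */
    eventcount *stream_get_ec(stream_t *s);        /* hypothetical query */

    void serve(stream_t *window_in, stream_t *other_in)
    {
        eventcount *ecs[2] = { stream_get_ec(window_in),
                               stream_get_ec(other_in) };
        unsigned long next[2] = { *ecs[0] + 1, *ecs[1] + 1 };
        for (;;) {
            int i = ec_wait(ecs, next, 2);  /* block until one advances */
            next[i] = *ecs[i] + 1;
            /* ... read whatever is now available on stream i ... */
        }
    }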

The Apollo distributed file system is well designed.  Every file has a unique
ID (UID).  The basic file primitive is "map" which takes a UID, an offset into
the file, and a number of bytes to map.  The specified file bytes are mapped
into the caller's virtual address space.  The kernel implements "map" for all
files, independent of what node the file is on.  All file access is ultimately
based on this primitive.  The scheme works well and is efficient enough to
work on large networks; Apollo's own network has over 90 nodes.
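
As a C declaration, the primitive might look roughly like this (the signature
is my guess at its shape, not Apollo's actual call):

    /* Map a window of the file named by "uid" into the caller's address
       space.  The kernel resolves the UID to a node and pages across the
       ring if the file is remote; the caller never sees the difference. */
    typedef struct { unsigned long high, low; } file_uid;    /* unique ID */

    void *file_map(file_uid uid, unsigned long offset, unsigned long nbytes);

    /* e.g.:  char *p = file_map(uid, 0, 4096);  p[0..4095] are the
       first 4096 bytes of the file, wherever it lives. */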

Given this structure, implementing diskless nodes was simple:  basically, all
the kernel needs is some disk space to page itself to.  Easily done:  create
a file (on a disked node) and do mapping operations on it.  The rest of the
diskless system just works:  all the user's file map operations simply turn
out to be non-local.  Essentially, none of the upper layers of the system
know anything about diskless nodes.

What all this good internal structure means to me is that the Apollo is a
system that can develop in many interesting ways simultaneously.  The base
structure is good.  It has been built from the ground up with expansion in
mind.

I look forward to hearing from other people praising the virtues of their
favorite workstations.

                -- Nat Mishkin

P.S.  For compatibility fans out there:  the Apollo has a Unix III
compatibility package.  Virtually all the Unix III programs run unmodified on
the Apollo.  People who are looking to Unix should consider this:  do they
need the Unix kernel itself, or just the interface it provides and the
user-state libraries?
-------