[comp.arch] Capabilities and Object Oriented Programming

lamaster@ames.arc.nasa.gov (Hugh LaMaster) (03/20/90)

At some time in the distant past (say, 8-10 years ago...) there was a flurry
of activity in Capability based computer systems.  One of the ideas in
vogue at the time was *System Level Objects*.  That is, that it would be a
*good idea* if you could give separate capabilities to individual objects.
Some good results came out of some of these efforts, but, at that time, 
there really wasn't that much interest in OOP.

(Aside: by system level objects, I mean giving each object process-like
properties, with a separate address space, etc.  This sort of approach
is generally referred to as a Capability-Based system, although sometimes
people mean various things by that term.)

Now, ignoring for the moment whether it is a *good idea* to do this, it could
have some performance problems :-)  OOP systems like Smalltalk can require
the creation and destruction of ~10,000 objects/sec/VUPS.  To support system
level objects, this would require modification of page tables at the same
rate.  (At least; it could also require a lot more things besides.  I guess
it depends on what capabilities are available.  But, at the least you need
to create an address space and identify it to the other objects in the system.)
Now, a TLB miss on some recent systems costs around 10-20 instruction
cycles.  But, how about a page table modification?  Does anyone know how
expensive creating or deleting an entry is?  (On example systems, such
as a MIPS or SPARC system running Unix, etc.)  I don't know whether the
cost of object creation would be as great as process creation, but I would
guess that process creation would set an upper bound, anyway.
You also need to be able to send messages from one object to the next
efficiently.  I don't know at what rate messages are sent in such systems.
Presumably System 38/etc. data might be of some help, but I have seen
precious little hard data published about that architectural family.
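
One crude way to put a number on this on a conventional Unix system is
to time a map/touch/unmap loop; it measures the whole syscall path, so
treat it as an upper bound on the raw page-table update.  A minimal,
untested sketch (MAP_ANONYMOUS is assumed here; on older systems you
would map /dev/zero instead):

    /* Rough proxy for page-table entry create/delete cost: time
     * repeated mmap()/munmap() of a single page.  Numbers and names
     * are illustrative, not measurements from any particular machine. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/time.h>

    #define ITERATIONS 100000L

    int main(void)
    {
        struct timeval t0, t1;
        long i;
        double usec;

        gettimeofday(&t0, NULL);
        for (i = 0; i < ITERATIONS; i++) {
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); exit(1); }
            ((char *)p)[0] = 1;     /* touch it so the PTE gets filled */
            munmap(p, 4096);
        }
        gettimeofday(&t1, NULL);
        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.2f usec per map/touch/unmap\n", usec / ITERATIONS);
        return 0;
    }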

What would be the best page table organizations to support this?
(Inverted Page Tables, for example...)


  Hugh LaMaster, M/S 233-9,  UUCP ames!lamaster
  NASA Ames Research Center  ARPA lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035     
  Phone:  (415)604-6117       

moss@ibis.cs.umass.edu (Eliot Moss) (03/21/90)

I don't think you would use traditional page oriented schemes for "system
level objects", because the average object is much too small. For example,
Smalltalk objects average about 40 bytes. It is a real challenge to design
systems that do not swamp this with space and time overheads. The usual
approach is to use segments rather than pages (though one can and probably
should store segments within pages (or some large unit of memory) for backing
store purposes). This tends to suggest that the best support requires a rather
different hardware approach, assuming you want all the checks in hardware. If
you're willing to let the compiler/run-time system do some of the work
(consistent with some notions of the RISC philosophy) then you could use a
more conventional architecture ... *but* your language must be a safe language
or the safety of the whole thing falls apart. This means C is right out,
unless protection, etc., is not an issue (many PC class systems ignore
protection and seem to get by all right, probably because most people run
little other than reasonably well debugged, commercially available programs).
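
To see just how badly pages lose, here is a back-of-the-envelope
comparison (the 40-byte average is the figure above; the 4K page and
the 16-byte descriptor are assumptions for illustration only):

    /* Space overhead of one-page-per-object vs. a segment descriptor
     * for a 40-byte average object; all constants are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double obj      = 40.0;    /* average Smalltalk object */
        const double page     = 4096.0;  /* one page per object */
        const double seg_desc = 16.0;    /* base + length + rights + tag */

        printf("page per object:    %.0f%% overhead\n",
               100.0 * (page - obj) / obj);        /* ~10140% */
        printf("segment descriptor: %.0f%% overhead\n",
               100.0 * seg_desc / obj);            /* 40% */
        return 0;
    }
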
Well, that's all I have time for ....				Eliot
--

		J. Eliot B. Moss, Assistant Professor
		Department of Computer and Information Science
		Lederle Graduate Research Center
		University of Massachusetts
		Amherst, MA  01003
		(413) 545-4206; Moss@cs.umass.edu

baum@Apple.COM (Allen J. Baum) (03/22/90)

>In article <45425@ames.arc.nasa.gov> lamaster@ames.arc.nasa.gov (Hugh LaMaster) writes:
>At some time in the distant past (say, 8-10 years ago...) there was a flurry
>of activity in Capability based computer systems.

You should also check out the Intel 80960XA. Descriptions are in the most 
recent Compcon proceedings.

--
		  baum@apple.com		(408)974-3385
{decwrl,hplabs}!amdahl!apple!baum

pcg@odin.cs.aber.ac.uk (Piercarlo Grandi) (04/09/90)

In article <c2LT02E099Yi01@amdahl.uts.amdahl.com> terry@uts.amdahl.com (Lewis T. Flynn) writes:

   This is called the "principle of least privilege" in security circles. We
   followed it religiously when designing KeyKOS objects and it proved to be
   really useful for several reasons. [ ... ] Super user is a
   meaningless concept in such a system.

Just as a funny note, there was an IBM capability machine that was so
religious on this issue that it did not require user programs to trust
the operating system; you could create an object that the operating
system could not access and did not even know about. If you lost your
capability to this object, the space used by it could only be reclaimed
by a CE loading special microcode... :-).
--
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

baxter@zola.ICS.UCI.EDU (Ira Baxter) (04/10/90)

Just to provide some fresh air, I am posting a copy of this response to
comp.arch.  There's way too much RISC vs. CISC discussion there :-{.

>> Do you have any suggestions/references on the use of capabilities
>> by operating systems, and particularly distributed operating systems.
>> In particular, I am interested in understanding how capabilities
>> are used in these systems, and if hardware support is appropriate
>> (how have capabilities been implemented on conventional hardware,
>> what approach has been taken so far to provide hardware support, and
>> what have been the problems and benefits of such support).
>>

Well, I am more interested in capabilities than I am an expert on them.
Having said that, now I will proceed to stick my foot in my mouth.

The major (real!) architectures I know about consist of the IBM Sward
architecture (which I think evolved into the IBM System/38, aka the
AS/400(?) ), the CMU Hydra (W. Wulf et al) operating system for C.MMP
(16 PDP-11s with a giant crossbar switch), and the CAP computer system
(Needham et al, U. of Cambridge).  I am afraid that I do not have
easily found references on these (I am packing, and those references
got packed first) but those pointers should at least get you close.
There is also a Digital Press book on Capability-Based Computer
Systems which seemed interesting (but it, too, is packed, sigh).
Last, but not least, there was an OS called Octopus that Lawrence
Livermore Laboratories fooled around with on their heterogeneous
computer network; but I don't know where that work ended up.
[I'll bet some soul on the network has a pretty complete bibliography
on capability systems; care to post it?]

>> It seems to me that to effectively/efficiently implement a capability
>> based operating system, which most distributed systems are turning into
>> (or an object-oriented language as well), there is going to need to be some
>> form of hardware support.  All of the experience in the past with
>> capability based hardware (Intel's iAPX432, and the 80960XA that BiiN
>> used) was in supporting a secure, Ada type environment which has never
>> become very popular.

I have heard of BiiN, but don't know anything about it.
It isn't clear to me that these architectures died of "Ada-type
environment"; I would guess rather that they died of dog-slow performance,
and their failure to match the marketing requirements of running DOS
or Unix :-{{{{{{

Perhaps we have to wait for the *next* internet worm to make
serious protection schemes marketable.  I think it is just a matter
of time.

>>                          The most valuable experience would be in supporting
>> one of the popular and current systems like Mach, V, Amoeba, etc., and
>> seeing what benefits you get out of the architecture.  It is papers
>> discussing capabilities in these contexts, especially relating to
>> implementation issues that I am interested in.

I found the CMU Hydra system to be one of the most interesting,
because they built capabilities on top of conventional hardware
(mostly by using trusted OS kernel routines); I think there was
a pretty good series of articles on it in SIGOPS sometime in the late
70s or early 80s.

The usual trick for conventional architectures implementing
capabilities is to use map hardware to partition process space into
code, data, and capability segments.   The capability segment
isn't really manipulable by the user process; rather the process uses
integer indices to reference the capabilities.  In this way,
the user processes have no idea of the physical content of
capabilities, nor can they diddle them.   Object handlers exist for
capabilities and remind one very much of all the OOP stuff now
going on.  The issues seem to be how to create new capabilities,
how to delete privileges from capabilities when handing them on,
how to "amplify" privileges in the object handlers (the Hydra stuff
is heavily into this), and how to handle dangling capabilities
(those for objects which have disappeared).
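
In (hypothetical) C, the C-list trick looks roughly like this; all the
names and sizes are made up for illustration, not taken from Hydra or
any other real system:

    #include <stddef.h>

    typedef unsigned int rights_t;
    #define R_READ   0x1
    #define R_WRITE  0x2
    #define R_DELETE 0x4

    struct capability {
        void    *object;     /* real address; invisible to user code */
        rights_t rights;
    };

    /* Per-process capability segment, writable only by the kernel.
     * User code names entries by small integer indices. */
    #define CLIST_SIZE 256
    static struct capability clist[CLIST_SIZE];

    /* Called inside the kernel on every object invocation. */
    static void *cap_lookup(int index, rights_t needed)
    {
        if (index < 0 || index >= CLIST_SIZE)
            return NULL;                        /* bad index */
        if (clist[index].object == NULL)
            return NULL;                        /* empty slot */
        if ((clist[index].rights & needed) != needed)
            return NULL;                        /* insufficient rights */
        return clist[index].object;
    }

    /* Handing a capability on with fewer privileges: copy the entry
     * into a free slot with some rights masked off. */
    static int cap_restrict(int index, rights_t keep)
    {
        int i;
        if (cap_lookup(index, 0) == NULL)
            return -1;
        for (i = 0; i < CLIST_SIZE; i++) {
            if (clist[i].object == NULL) {      /* free slot */
                clist[i].object = clist[index].object;
                clist[i].rights = clist[index].rights & keep;
                return i;                       /* index for the callee */
            }
        }
        return -1;                              /* C-list full */
    }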

A topic little discussed is the inconvenience
of the capability segment; it is hard to pass capabilities around
(you have to pass the integer index instead), and one has to somehow
allocate the slots in the capability segment (fixed allocation?
garbage collection?).  This is why I like the Amoeba encrypted
capability idea (if it is indeed what I think it is, I have not seen
any literature on it yet).   One can treat a capability just as
any other data object; this would work well on a conventional
RISC/CISC machine.   The loser is that to execute an operation on
an object, one would have to decrypt the capability; here is where
architectural support would be a big win.
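
My guess at the shape of such a scheme, as ordinary data (pure
speculation, since I haven't seen the Amoeba literature; mix() below
is a toy stand-in for a real cryptographic one-way function):

    #include <stdint.h>

    struct sealed_cap {
        uint32_t object;     /* which object */
        uint32_t rights;     /* what the holder may do */
        uint64_t check;      /* seal over (object, rights, secret) */
    };

    /* Known only to the kernel/server that implements the object. */
    static uint64_t secret = 0x5DEECE66DULL;

    static uint64_t mix(uint32_t object, uint32_t rights)
    {
        /* toy mixing function; do not use for real protection */
        uint64_t x = secret ^ ((uint64_t)object << 32) ^ rights;
        x ^= x >> 33;
        x *= 0xFF51AFD7ED558CCDULL;
        x ^= x >> 33;
        return x;
    }

    static struct sealed_cap cap_make(uint32_t object, uint32_t rights)
    {
        struct sealed_cap c = { object, rights, 0 };
        c.check = mix(object, rights);
        return c;
    }

    /* Validation on use: recompute the seal and compare.  A forged or
     * tampered capability fails here. */
    static int cap_valid(const struct sealed_cap *c)
    {
        return c->check == mix(c->object, c->rights);
    }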

Consider a machine that had index registers for pointing to objects.
Loading a capability into an index register could decrypt it on
the spot (like the Intel 386 segment registers appear to do).
If we insist that operations on the object be done *only* through
index registers (if you think of objects as memory,
this looks amazingly like RISC!), then one can dynamically check
the operations as they fly by.   Instructions to restrict
the capabilities represented by an index register would be cheap
because the capability can be stored internally in a decrypted form.
Storing the index register re-encrypts it.
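
In software, the register model might look like this (the semantics
are invented for illustration; it reuses sealed_cap, cap_make, and
cap_valid from the previous fragment):

    /* A capability "index register": unseal once at load, check
     * cheaply on every use, re-seal at store. */
    struct cap_reg {
        uint32_t object;
        uint32_t rights;     /* held unsealed while in the register */
        int      valid;
    };

    static int capreg_load(struct cap_reg *r, const struct sealed_cap *c)
    {
        if (!cap_valid(c))              /* the one expensive check */
            return -1;
        r->object = c->object;
        r->rights = c->rights;
        r->valid  = 1;
        return 0;
    }

    /* Restriction is cheap: just mask the unsealed rights. */
    static void capreg_restrict(struct cap_reg *r, uint32_t keep)
    {
        r->rights &= keep;
    }

    static struct sealed_cap capreg_store(const struct cap_reg *r)
    {
        return cap_make(r->object, r->rights);   /* re-seal on store */
    }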

>>
>> Also, it seems to me that there is confusion around capabilities as
>> an object in the operating system, and capabilities as a feature of the
>> architecture (a capability for a file being the former, and a capability
>> for an memory buffer being the latter).  There might (probably should) be
>> some relation between the two; some clarification/exploration of the
>> differences and similarities of these two ideas would also be interesting
>> to hear about.  I saw an article that someone posted describing the
>> difference as being between an I/O multiplexor and an IPC multiplexor...
>> do you know anywhere where this differentiation is explored/explained.

No.  But I can't see why it should be different from software support for
floating point vs hardware implementation of the same; it is really a
matter of performance (with some protection issues thrown in).  The
interesting issue here is, how can one design something like IEEE fp
standard for capabilities, so that they can be used on dissimilar
"architectures" in a standardized way?

>> Brad Smith
>> PS - Any luck on the KeyKOS references?

Not yet.  But I'm not pursuing it very hard.

patrick@convex.com (Patrick F. McGehearty) (04/10/90)

In article <9004091928.aa26181@PARIS.ICS.UCI.EDU> baxter@zola.ICS.UCI.EDU (Ira Baxter) writes:
>
>I found the CMU Hydra system to be one of the most interesting,
>because they built capabilities on top of conventional hardware
>(mostly by using trusted OS kernel routines); I think there was
>a pretty good series of articles on it in SIGOPS sometime in the late
>70s or early 80s.

Some references:
Proceedings of the Fifth Symposium on Operating System Principles,
19-21 November, 1975 at Univ. of Texas at Austin

"Overview of the Hydra Operating System Development" by W. Wulf, R. Levin,
and C. Pierson.

"Policy/Mechanism Separation in Hydra" by R. Levin, E. Cohen, W. Corwin,
F. Pollack, and W. Wulf.

"Protection in the Hydra Operating System" by E. Cohen and D. Jefferson.
-----
The CAP system appears in the
Proceedings of the Sixth Symposium on Operating System Principles,
 16-18 November, 1977 at Purdue University

"The Cambridge CAP Computer and its Protection System" by R.M. Needham and
R.D.H. Walker.

"The CAP Filing System", by R.M. Needham, and A.D. Birrell.

"The CAP Project - An Interim Evaluation", by R.M. Needham
------
Proceedings of the other Symposia on Operating System Principles are likely to
have worthwhile papers also, but I don't have immediate access to them.

I worked on measurement of the Hydra system, so I speak from experience when
I say that the implementation overhead of handling capabilities makes a
critical difference in how useful they are.  Hydra had a relatively high
overhead in entering the kernel due to the extensive checking that was
required to maintain domain boundaries, since little direct hardware support
was available for this purpose.  Thus, any capability based operation was
quite expensive.  In the original design, the interprocess communication
system was implemented outside the kernel.  The overheads were so large as
to make the system virtually unusable.  Because of the expected need for
frequent communication in a multiprocessor environment, this system was
moved into the kernel.  If capability access had been cheap, this change
would not have been necessary.

I have my doubts about encrypted capabilities for performance reasons.  If
encryption is fast, then code cracking is feasible, which breaks the
protection.  If encryption is not fast, then capabilities will be bypassed
to avoid the overheads, again breaking the protection.