[net.lang.st80] symmetry and multiple inheritance

eppstein@columbia.UUCP (David Eppstein) (03/29/85)

As I see it, Smalltalk currently has several problems.  I will
describe them first, then give some ideas for possible solutions.

 - Messages are not symmetric in their arguments.
   If a message needs to do different things for different pairs of
   objects (e.g., real+int addition versus real+real), different
   mechanisms must be used to distinguish the cases, depending on
   whether the difference in types is in the first argument or in a
   later one.

 - Hierarchical rather than multiple inheritance.
   There is no way to, say, have the class of rational numbers be a
   subclass of both the real numbers and the algebraic numbers.
   I have heard rumors that this is no longer true of Smalltalk but I
   haven't seen any details of how this is done.

 - No scoping.
   Any message can be called from any other message.  This conflicts
   with the principles of information hiding and abstract data types.
   It does not seem difficult to solve, so I won't go into it here.
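The first problem can be made concrete with a small sketch (Python standing in for Smalltalk; the class names Int and Real are illustrative only).  Dispatch is on the first argument alone, so real+int is handled by one mechanism (a method on the receiver) while int+real must be caught by a separate "reflected" mechanism:

```python
# Sketch of the asymmetry problem.  Dispatch goes to the first
# argument's class, so real+int and int+real need different mechanisms.

class Int:
    def __init__(self, v): self.v = v
    def __add__(self, other):
        if isinstance(other, Int):
            return Int(self.v + other.v)
        return NotImplemented          # punt: Int's method can't handle Real

class Real:
    def __init__(self, v): self.v = v
    def __add__(self, other):          # handles real+real and real+int
        return Real(self.v + other.v)
    def __radd__(self, other):         # a second, separate mechanism
        return Real(other.v + self.v)  # needed just for int+real

assert isinstance(Real(1.5) + Int(2), Real)   # first-argument dispatch
assert isinstance(Int(2) + Real(1.5), Real)   # falls back to __radd__
```

A symmetric message system would let one definition per type combination stand on its own, with no reflected special case.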

There are two obvious obstacles to symmetry and multiple inheritance:
the special variables self and super.  Self can be replaced by another
named argument, but super needs to be replaced by a more general type
coercion facility.  Also, all named arguments in a message definition
would need to be given types.

However, there is a more complicated problem: with either symmetry
or multiple inheritance there can be several possible message
definitions with different combinations of arguments, but such that
no definition dominates any other.  For instance, if we had defined
addition for real+int and int+real, but not int+int, there would be
no obvious way to choose which definition to use when both arguments
are ints.

Now let us assume multiple inheritance, but require that the partial
ordering of types under the ancestry relation forms a lattice.  For
hierarchical inheritance this is trivially true, but for multiple
inheritance we need to introduce an (easily automated) transformation
of the type structure: if we have types in the following structure

        real numbers    algebraic numbers
             |     \  /     |
             |      \/      |
             |      /\      |
             |     /  \     |
        rational numbers    real algebraic integers

then we must introduce a new type, to ensure that (real numbers) and
(algebraic numbers) have a unique meet:

        real numbers    algebraic numbers
                  \        /
            real algebraic numbers
                  /        \
        rational numbers    real algebraic integers
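
This transformation is easy to automate.  Here is one way it might be sketched in Python (the poset representation, function names, and string type names are all my own invention): each type maps to its set of immediate supertypes, and a fresh type is spliced in when two types have more than one maximal common lower bound.

```python
# Sketch of the lattice-repair transformation.  A type's "ancestors"
# are itself plus everything above it; the meet of a and b exists when
# they have exactly one maximal common lower bound.

def ancestors(poset, t):
    """t together with everything above it in the type ordering."""
    seen, todo = set(), [t]
    while todo:
        x = todo.pop()
        if x not in seen:
            seen.add(x)
            todo.extend(poset[x])
    return seen

def maximal_lower_bounds(poset, a, b):
    """Maximal types lying below both a and b; a unique one is the meet."""
    common = [t for t in poset if {a, b} <= ancestors(poset, t)]
    return [t for t in common
            if not any(u != t and u in ancestors(poset, t) for u in common)]

poset = {
    "real":      set(),
    "algebraic": set(),
    "rational":               {"real", "algebraic"},
    "real algebraic integer": {"real", "algebraic"},
}

# Before the repair: two maximal lower bounds, so no meet exists.
assert len(maximal_lower_bounds(poset, "real", "algebraic")) == 2

# The repair: splice "real algebraic" in between the two levels.
poset["real algebraic"] = {"real", "algebraic"}
poset["rational"] = {"real algebraic"}
poset["real algebraic integer"] = {"real algebraic"}

assert maximal_lower_bounds(poset, "real", "algebraic") == ["real algebraic"]
```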

This lattice on types of course induces a lattice on cartesian
products of types.  With this lattice structure, we have the mechanism
for introducing another restriction: if a message is defined for two
combinations of types, then it must be defined for the meet of the two
combinations.  For instance, if addition is defined on real+int and
int+real, then it must also be defined on (real+int ^ int+real) =
int+int.  This is not really a restriction; if we want int+int to use
the int+real definition, we can simply use the type coercion facility
in a dummy definition for int+int.

With these two restrictions we can find the definition to use for any
message applied to any combination of types: the definition whose type
is the meet of the types of all the definitions for that message that
can possibly be applied to the given arguments.  The first restriction
forces the meet to exist, and the second forces a definition to exist
there.  In an actual implementation the appropriate definition can be
found by a simple breadth-first search.
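
The lookup rule can be sketched as follows (Python again; Int is modeled as a subtype of Real purely for illustration, and I use a plain scan over the method table rather than the breadth-first search an implementation would use).  A definition applies when each argument's type lies at or below the corresponding parameter type, and the definition chosen is the one whose signature is the meet of all applicable signatures:

```python
# Sketch of meet-based method lookup.  The two restrictions in the text
# guarantee that the meet of the applicable signatures exists and is
# itself defined, so the scan below always finds it.

class Real: pass
class Int(Real): pass        # Int sits below Real in the type lattice

defs = {
    (Real, Int): "real+int",
    (Int, Real): "int+real",
    # Restriction 2: the meet (Int, Int) must also be defined; here it
    # is a dummy definition standing for a coercion to int+real.
    (Int, Int): "int+int (dummy: coerce and reuse int+real)",
}

def below(sig, other):
    """sig <= other componentwise, under the subtype ordering."""
    return all(issubclass(s, o) for s, o in zip(sig, other))

def lookup(defs, *arg_types):
    applicable = [sig for sig in defs if below(arg_types, sig)]
    for sig in applicable:                      # find the least signature
        if all(below(sig, other) for other in applicable):
            return defs[sig]
    raise TypeError("no applicable definition")

assert lookup(defs, Real, Int) == "real+int"
assert lookup(defs, Int, Real) == "int+real"
assert lookup(defs, Int, Int).startswith("int+int")   # meet of all three
```

For int+int arguments all three definitions apply, and their meet (Int, Int) is the one selected, exactly as the restriction requires.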

Does all of this make any sense?  Is any of it new?  Useful?