pcg@cs.aber.ac.uk (Piercarlo Grandi) (11/05/90)
I have been wondering about supporting OO programming in Ada using delegation rather than inheritance. The following article continues some musings of mine relative to C++.

Inheritance in OO languages is about one bad thing and one clumsy thing:

* the bad thing is that inheritance as prefixing was introduced in Simula 67 to make it possible to create composite objects using contiguity and not just access types, and this was needed only because Simula 67 would otherwise only have allowed object composition via access types.

* the clumsy thing is that inheritance is a poor way to achieve reuse of definition and reuse of implementation, because it ties the two together in some gluey way, and one has to do funny things like abstract classes to separate the two.

Delegation is another way of providing a cleaner algebra for doing the same things that inheritance does; it may be better for Ada. A particular form of delegation, which is equivalent to garden-variety inheritance, is the consequence of the following rule: allow any subfield of a record to be named with any unique subset of its full path name. This effectively merges the definitions of the subrecords of a record, while retaining some of their distinctiveness if needed. The same could be allowed for packages. Example:

    TYPE r1 IS RECORD
        a : natural;
        b : float;
    END RECORD;

    TYPE r2 IS RECORD
        c : character;
        d : string(1 .. 32);
    END RECORD;

    TYPE r3 IS RECORD
        s1 : r1;
        s2 : r2;
    END RECORD;

    s3 : r3;

We could then have:

    full paths      abbreviated paths
    s3.s1.a         s3.a
    s3.s1.b         s3.b
    s3.s2.c         s3.c
    s3.s2.d         s3.d

In other words, type 'r3' is the union of types 'r1' and 'r2'. A few Ada-specific syntax issues need a bit of thinking (how to merge parameter lists, or package bodies), but they are far from insurmountable. So, this takes care of what is commonly called 'inheritance'.
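A rough C++ analogue of the merged-record idea (all names here are invented for illustration, not part of the posting): r3 contains r1 and r2 as components, and the "abbreviated paths" are simulated by small accessors that delegate to the subrecords.

```cpp
#include <cassert>

// Sketch only: r3 delegates its "abbreviated" field names to the
// subrecords s1 and s2, mimicking the path-abbreviation rule above.
struct r1 { unsigned a; float b; };
struct r2 { char c; char d[32]; };

struct r3 {
    r1 s1;
    r2 s2;
    // abbreviated-path accessors, delegating to the subrecords
    unsigned& a() { return s1.a; }   // stands for s3.s1.a
    float&    b() { return s1.b; }   // stands for s3.s1.b
    char&     c() { return s2.c; }   // stands for s3.s2.c
};
```

Writing through an abbreviated path and reading through the full path then touch the same field, which is the point of the proposed rule.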
There is the other feature of OO programming, commonly called 'polymorphism', which is really runtime *definition* overloading, in the sense that the same function definition can be applied to various different types and the right implementation is chosen automatically at runtime depending on the type of the arguments. Ada function and operator overloading gives the compile-time version of polymorphism for procedures and functions. We need only a mechanism to say that we also want run-time overloading. A kind of bounded runtime overloading is already achieved with records with cases. We could actually extend overloading (and genericity) to runtime types. It is not difficult to think of a few different syntaxes for this, which, like the above proposal for merging the definitions of two records or packages, would be totally backwards compatible. For example one can currently overload area_of statically for squares and circles like this:

    TYPE square IS RECORD
        side : float;
    END RECORD;

    TYPE circle IS RECORD
        radius : float;
    END RECORD;

    FUNCTION area_of(s : IN square) RETURN float IS
    BEGIN
        RETURN s.side * s.side;
    END area_of;

    FUNCTION area_of(c : IN circle) RETURN float IS
    BEGIN
        RETURN c.radius * c.radius * 3.1415927;
    END area_of;

We want to express the notion that this should happen at run time; currently we must write:

    TYPE which IS (asquare, acircle);

    TYPE figure(w : which) IS RECORD
        CASE w IS
            WHEN asquare => s : square;
            WHEN acircle => c : circle;
        END CASE;
    END RECORD;

    FUNCTION area_of(f : IN figure) RETURN float IS
    BEGIN
        CASE f.w IS
            WHEN asquare => RETURN area_of(f.s);
            WHEN acircle => RETURN area_of(f.c);
        END CASE;
    END area_of;

This is manually implemented bounded polymorphism. The 'manually' and 'bounded' are the objectionable parts. As an example of a possible syntax we could have an unbounded definition like this:

    FUNCTION area_of(f : IN ACCESS) RETURN float;   -- no IS allowed!
This would notify the compiler that 'area_of' is a runtime overloaded function over any access type to those types for which a statically overloaded area_of exists. This is akin to having a 'defgeneric' in CLOS, and similar implementation techniques can be used. When a polymorphic procedure is defined, the compiler could start to collect in a table the addresses of its implementations, and in another table the types on which these implementations are overloaded; making the two tables available at run time would enable resolution of the correct implementation. For example one could have a table of pointers to implementations indexed by a type code, and decorate each access value, or each accessed value, with the relevant type code when a NEW is executed. Hash tables indexed by both function code and type code could also be profitably used, or even tables indexed by strings, depending on the depth of compile and link time analysis desired, and on how many arguments are dynamically overloaded.

One might use the unconstrained indicator <> to indicate unbounded overloading, in yet another syntax form; for example, to define less unbounded types one could have:

    TYPE figure IS <>;

    FUNCTION area_of(f : figure) RETURN float;

    TYPE square IS RECORD
        f    : figure;
        side : float;
    END RECORD;

    TYPE circle IS RECORD
        f      : figure;
        radius : float;
    END RECORD;

Here the compiler has more compile-time information; this looks very much like abstract base classes in C++. The idea is that a type declared as '<>' is really implemented as an access value to a table of pointers to implementations, the table to be indexed by the type code of the types that contain a field like it.

----------------

Ahem: in case the discussion above raises eyebrows, I will repeat here the gist of a previous posting of mine. We want to be able to mix and match definitions with implementations (and specifications, but Ada does not deal with them), and abstractions over them, so that we can reuse them orthogonally.
For example, we want to be able to associate alternative implementations with a single definition, or to use the same implementation with different definitions, or to combine definitions or implementations with each other; in general we want to define algebras over definitions, implementations and specifications, so that arbitrary combinations and abstractions are possible. Examples: procedures are a way to abstract implementations over the specific entities they operate upon; overloaded procedures are a way to abstract procedure definitions over the type of arguments; type-generic procedures are a way to abstract procedure implementations over the type of arguments. Functional languages have partial application and function combination, which are further possibilities in the algebras of definitions and of implementations respectively.

Type overloading is a property of a definition; it means providing alternative definitions for the same procedure and letting the compiler choose the right one based on the compile-time type of the parameters. Each definition has a statically associated implementation.

Type genericity is a property of an implementation; it means that the implementation of a procedure is invariant with respect to a certain set of types of the arguments. This is only possible if, to manipulate those arguments, the implementation uses only operations overloaded on the same set of types.

Both overloading and genericity can be either compile time or run time; they are compile time if the type does not depend on the program's inputs, run time if it does. Polymorphism is runtime type overloading. It may be bounded, in which case it is known at compile time that the type will belong to a certain finite set; unbounded, when it will belong to a certain infinite set (e.g. it is known to be an access type, but not which); or unlimited, in which case nothing is known about the runtime characteristics of the type.
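The distinction between compile-time and run-time overloading can be sketched in C++ terms (all names here are mine, not from the posting): the two area_of definitions are resolved statically from the declared type, while the virtual function is resolved from the type the object actually has at run time.

```cpp
#include <cassert>

// Compile-time overloading: two definitions of area_of; the compiler
// picks one from the static type of the argument.
struct square { float side; };
struct circle { float radius; };

float area_of(const square& s) { return s.side * s.side; }
float area_of(const circle& c) { return 3.1415927f * c.radius * c.radius; }

// Run-time overloading: one definition (shape::area), with the
// implementation chosen from the runtime type of the receiver.
struct shape {
    virtual ~shape() {}
    virtual float area() const = 0;
};

struct vsquare : shape {
    float side;
    explicit vsquare(float s) : side(s) {}
    float area() const { return side * side; }
};

// Dispatch happens here at run time, through the base type.
float area_via_base(const shape& s) { return s.area(); }
```

In the posting's vocabulary, the virtual call is unbounded runtime overloading: it is known only that the argument is some descendant of shape.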
Runtime genericity is when all the operations on arguments in the procedure implementation are polymorphic. It can also be classified as bounded, unbounded or unlimited.

In Ada we have compile-time overloading and compile-time genericity, and a bounded form of runtime overloading, with case records, in which the types of the possible cases are known at compile time. What we need is some way to have unbounded or unlimited runtime overloading; runtime genericity will then be a consequence.

In Lisp we have bounded genericity, in that (approximately) the set of types is not expandable, but any primitive works on any type (very approximately). In C++ we have compile-time overloading and unbounded runtime overloading (virtual functions are defined on all classes derived from the same class), which may become unbounded runtime genericity if a virtual function only calls other virtual functions. In Smalltalk we have compile-time overloading (two classes can define the same method protocol) with run-time overloading (the actual method for a message send depends on both the protocol and the runtime type of the receiver), and run-time genericity (all message sends are run-time overloaded).
--
Piercarlo "Peter" Grandi           | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk
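The table-driven implementation Grandi describes earlier in the post (a table of implementation addresses indexed by a type code carried by each allocated value) can be sketched as follows; every name here is illustrative, and the sketch deliberately uses raw function pointers rather than C++ virtuals to show the mechanism itself.

```cpp
#include <cassert>

// Hypothetical sketch of dispatch through an explicit table of
// implementations indexed by type code.
struct square { float side; };
struct circle { float radius; };

enum type_code { tc_square, tc_circle };

// An access value "decorated" with its type code, as the posting
// suggests doing when a NEW is executed.
struct any { type_code tc; void* p; };

static float area_square(void* p) {
    square* s = (square*)p;
    return s->side * s->side;
}
static float area_circle(void* p) {
    circle* c = (circle*)p;
    return 3.1415927f * c->radius * c->radius;
}

// Table of pointers to implementations, indexed by type code.
static float (*area_table[])(void*) = { area_square, area_circle };

// The runtime-overloaded area_of: resolution is a table lookup.
float area_of(any a) { return area_table[a.tc](a.p); }
```

A compiler could fill such tables automatically from the statically overloaded definitions, which is essentially the CLOS-style scheme described above.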
johnson@m.cs.uiuc.edu (11/05/90)
Peter Grandi's idea for object-oriented Ada was pretty interesting and seemed fairly reasonable. However, there are a few things that I disagree with.

>the clumsy thing is that inheritance is a poor way to achieve reuse of
>definition and reuse of implementation, because it ties the two together
>in some gluey way, and one has to do funny things like abstract classes
>to separate the two.

The real problem is not that inheritance is a poor way to reuse implementation, but that it is a poor way to reuse specification. In my opinion, subclassing has little to do with subtyping. Languages like C++ or Eiffel that tie them together then have to struggle with the problems that this creates. This causes programmers to create abstract classes only to get around problems caused by the type checker. However, this is NOT the main purpose of abstract classes.

Abstract classes are important because they let you reuse design. Abstract classes replace, and are a big advance over, program skeletons. Deferred operations are essentially blanks that a subclass implementer fills in, or design decisions that the subclass implementer is told to make. The use of abstract classes as a way to reuse design is very, very important, and language designers should make sure that they understand it well, to make sure that their new languages still support it. For example, I am not sure that Peter's Ada++ design will support it. Can a component defer the definition of operations to its delegators?

The 1990 ACM Principles of Programming Languages conference had several papers on object-oriented programming, two of which (one was mine) were on the fact that subclassing and subtyping are different. I recommend them to people interested in this question.

Ralph Johnson -- University of Illinois at Urbana-Champaign
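Johnson's point that abstract classes reuse design, not just interfaces, is what later became known as the template-method style: the abstract class supplies working code written in terms of deferred operations. A minimal C++ sketch (class and method names are mine, purely illustrative):

```cpp
#include <cassert>
#include <string>

// The abstract class carries a reusable algorithm (render) that is
// written in terms of deferred operations (header, body) -- the
// "blanks" a subclass implementer fills in.
class report {
public:
    virtual ~report() {}
    std::string render() {               // reusable design, not a skeleton
        return header() + body() + "\n";
    }
protected:
    virtual std::string header() = 0;    // deferred: blank to fill in
    virtual std::string body()   = 0;    // deferred: blank to fill in
};

class daily_report : public report {     // subclass fills in the blanks
protected:
    std::string header() { return "Daily: "; }
    std::string body()   { return "ok"; }
};
```

Here the abstract class has useful code of its own, as Johnson says Smalltalk abstract classes almost always do; an abstract class consisting only of pure declarations would reuse no design at all.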
bertrand@eiffel.UUCP (Bertrand Meyer) (11/07/90)
From <77500062@m.cs.uiuc.edu> by johnson@m.cs.uiuc.edu (Ralph Johnson):

> The real problem is not that inheritance is a poor way to reuse
> implementation, but that it is a poor way to reuse specification.
> In my opinion, subclassing has little to do with subtyping. Languages
> like C++ or Eiffel that tie them together then have to struggle with
> the problems that this creates. This causes programmers to create
> abstract classes only to get around problems caused by the type checker.

Let me disagree. What I find exciting about inheritance is precisely that it is two mechanisms combined into one: a module reuse mechanism and a subtyping mechanism. This is of course a natural consequence of the view that a class is two things: modular facility and type. This view is applied to its full extent in Eiffel, where there is no modular facility other than the class, and every possible type is based on a class. The rules applying to inheritance (e.g. information hiding rules, the covariant conventions for typing, etc.) follow directly.

The above two purposes of inheritance are not equally useful in all cases. Sometimes you want inheritance exclusively as a way to reuse the facilities of an existing module (as when you inherit from the Eiffel library class EXCEPTIONS, which enables the authors of descendant classes to adjust the exception handling mechanism to their specific needs, or MEMORY, which gives them facilities for fine-tuning garbage collection and memory management). At other times, you will use inheritance solely for subtyping, as when defining FILE as an heir to DEVICE. In many cases, however, you want a little of both. A typical example is the library class STORABLE, used to make objects persistent (storing and retrieving entire object structures): this may be viewed both as a ``library'' of persistence facilities, and as the abstract data type describing persistent object structures.
I do not understand Professor Johnson's last comment about creating abstract classes ``only to get around problems caused by the type checker''. This does not relate to anything in my experience. In fact, I have often heard objections to static typing (often from people used to programming in Smalltalk, Objective-C or C++) based on the argument that strong typing limits the expressive power of object-oriented programming. Whenever I have tried to go further and understand the real problem, it has turned out that the claimed counter-examples (in which typing constraints were supposed to be an obstacle) only existed because of the absence of one or more of

    (a) multiple inheritance
    (b) genericity
    (c) reverse assignment attempt

and could be handled quite elegantly, with typing constraints fully in force, as soon as these mechanisms were available. I do not know, of course, what Professor Johnson really has in mind, but if anyone has a good ``impossible case'' for typing I would be glad to see it discussed on the net. (Yes, this is a challenge.)
--
Bertrand Meyer
bertrand@eiffel.com
craig@Neon.Stanford.EDU (Craig D. Chambers) (11/08/90)
In article <443@eiffel.UUCP> bertrand@eiffel.UUCP (Bertrand Meyer) writes:

> This is of course a natural consequence of the view that a class
>is two things: modular facility and type. This view is applied
>to its full extent in Eiffel, where there is no other modular facility
>than the class, and every possible type is based on a class. The
>rules applying to inheritance (e.g. information hiding rules, the
>covariant conventions for typing etc.) follow directly.

I disagree that Eiffel's typing rules "follow directly" from treating a class as both a module (a collection of method implementations) and a type (a set of procedure/function interfaces). I think that the goal of a language's static type system should be to guarantee statically that type-checked programs will never manipulate objects in ways that weren't intended (e.g. no "message not understood" or "access denied" errors), for as many reasonable programs as possible. Eiffel must not have this as its goal, because there exist well-known holes in its static type checking (e.g. covariant typing rules). Unifying modules and types into classes doesn't require these holes in type checking. Consider Trellis/Owl as a language with the same unification that preserves static type checking.

However, unifying modules and types as classes (and unifying "submoduling" (code inheritance) and subtyping as subclassing) *does* lead to some sacrifices. Trellis/Owl sacrifices submoduling of classes that aren't legal supertypes (subtyping is considered more important than submoduling). Eiffel sacrifices static type checking, and so treats submoduling as more important than (correct) subtyping. Bertrand Meyer's recent proposals to extend Eiffel's type checking to plug the holes in the static type system effectively allow a subclass to be a submodule without being a subtype, thus separating the two concepts that supposedly are one thing. C++ has a limited way of doing this already with its private base classes.
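The C++ private-base-class mechanism Chambers mentions can be sketched like this (class names are mine): the derived class reuses the base's implementation, but is not a subtype of it, since the inheritance relationship is hidden from clients.

```cpp
#include <cassert>

// Implementation to be reused ("submodule").
class counter {
public:
    void bump()        { ++n; }
    int  value() const { return n; }
private:
    int n = 0;
};

// Private inheritance: turnstile reuses counter's code, but a
// turnstile is NOT usable where a counter is expected -- submodule
// without subtype.
class turnstile : private counter {
public:
    void pass()         { bump(); }     // delegate to the inherited code
    int  passes() const { return value(); }
};
```

A `counter* c = new turnstile;` would be rejected by the compiler, which is exactly the separation of code inheritance from subtyping that the surrounding discussion is about.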
Perhaps the "right" approach is to use a single language construct to define a collection of interfaces and optionally a collection of implementations (called, say, a class), and to have two ways of inheriting from classes: one as both a subtype and submodule, and one as a submodule only. This retains the conciseness of expression that is an advantage of the unified approaches of most popular OO languages (the programmer doesn't have to write separate type constructs and module constructs for common abstractions), but preserves the important distinction between subtyping and code inheritance.

> (c) reverse assignment attempt

What does this mean?

> I do not know, of course, what Professor Johnson really has
>in mind, but if anyone has a good ``impossible case'' for typing
>I would be glad to see it discussed on the net.
>(Yes, this is a challenge.)

What about the examples I posted a few weeks ago as part of this ongoing discussion? They were perform:, become:, and the implementation of OrderedCollection in Smalltalk (see pp. 231-233 of the "Blue Book"). The OrderedCollection problem could be avoided if the language includes a default instance of each class (such as a null pointer), but null pointers (void references in Eiffel) just introduce a whole new class of run-time errors that cannot be detected statically (e.g. sending messages to null pointers).

-- Craig Chambers
johnson@m.cs.uiuc.edu (11/09/90)
>bertrand@eiffel.UUCP
>I do not understand Professor Johnson's last comment about
>creating abstract classes ``only to get around problems caused by
>the type checker''. This does not relate to anything in my experience.
>In fact, I have often heard objections to static typing (often from
>people used to programming in Smalltalk, Objective-C or C++) based
>on the argument that strong typing limits the expressive power
>of object-oriented programming. Whenever I have tried to go further
>and understand the real problem, it has turned out that the claimed
>counter-examples (in which typing constraints were supposed to be an
>obstacle) only existed because of the absence of one or more of
> (a) multiple inheritance
> (b) genericity
> (c) reverse assignment attempt
>and could be handled quite elegantly, with typing constraints
>fully in force, as soon as these mechanisms were available.

I was not talking about programs that were impossible to type-check. I was just complaining about programs that created abstract classes with no implementation just so classes could multiply inherit from them and so have several interfaces. I consider this a bit ugly, but that is not the main problem. The main problem is that I keep running into people who think that this is the main purpose of abstract classes, whereas the REAL purpose of abstract classes is to reuse design, including abstract algorithms. Thus, abstract classes in Smalltalk almost always have useful code that is defined in terms of the undefined operations. I've seen quite a few abstract classes in C++ that don't. To reiterate, I was not claiming that type-checking limits expressiveness, just that types-as-classes seemed to encourage people to misunderstand the most important purpose of abstract classes.

Ralph Johnson -- University of Illinois at Urbana-Champaign
bertrand@eiffel.UUCP (Bertrand Meyer) (11/09/90)
From <1990Nov7.220902.13393@Neon.Stanford.EDU> by craig@Neon.Stanford.EDU (Craig D. Chambers):

> I disagree that Eiffel's typing rules "follow directly" from treating
> a class as both a module (a collection of method implementations) and
> a type (a set of procedure/function interfaces).
>
> [Various incorrect statements.]

I don't really see the benefit of attributing to someone (me in this case) comments which are the exact reverse of what that person actually wrote. This defeats the whole purpose of any technical discussion - all the more regrettably as other aspects of Mr. Chambers's posting are worth discussing. So I'll just pass, referring any interested party to existing publications and previous extensive postings on comp.lang.eiffel and other places.

> > (c) Reverse Assignment Attempt
>
> What does this mean?

Think of Reverse Assignment Attempt as disciplined, controlled casting. The instruction

    y ?= x

is valid only if the type of y conforms to the type of x. (``Conforms to'' roughly means ``is a descendant of'' in the sense of inheritance, with some extra care due to genericity and expanded types.) For normal assignment (x := y), conformance must be the other way. A Reverse Assignment Attempt such as the above, which assumes that x and y are references, has the effect of attaching to y the object attached to x (as any assignment should do), but only if x is attached to an object whose type conforms to the type of y. In all other cases (i.e. x is attached to an object of non-conformant type, or x is void, that is to say not attached to any object), the effect is to make y void.

In a typed environment this is needed for situations when the programmer has (or thinks he has) the type information associated with a certain object, but the software text does not have it (often because it has lost it). This happens particularly when dealing with persistent data: when retrieving an object structure (stored e.g.
using the STORABLE class, which maps an entire data structure, with all references preserved, cycles and all, onto permanent storage), you will use general-purpose mechanisms, which can only assume the most general type (ANY). If you know, or think you know, the actual type of a retrieved object, you need the Reverse Assignment Attempt to manipulate that object under its proper type - after checking that your assumptions are indeed correct. The scheme is:

    value: ANY;
        -- ANY is the most general type,
        -- to which all types conform
    my_value: ASSUMED_TYPE;
    ...
    value := retrieve (data_base, key, ...);
        -- `retrieve' is a general-purpose mechanism,
        -- which can only assume ANY as type information
        -- for its result.
    my_value ?= value;
    if my_value.Void then
        -- My assumption was wrong! No object
        -- was retrieved from the database,
        -- or an object of the wrong type.
        "Some error or special processing"
    else
        -- Proceed normally with ASSUMED_TYPE operations
        -- on the retrieved object:
        my_value.op1; my_value.op2; ...
    end

> The OrderedCollection problem could be avoided if
> the language includes a default instance of each class (such as a null
> pointer), but null pointers (void references in Eiffel) just introduce
> a whole new class of run-time errors that cannot be detected
> statically (e.g. sending messages to null pointers).

This is correct, but unfortunately there is no way of dealing with these errors statically in a typed language that supports references. (At least there is no practical way unless one is prepared to include a general-purpose resolution theorem prover in the compiler.)

Giving up references means giving up linked structures, cyclic ones, etc. This does not seem realistic. If we do have references, then we need a void reference. Then the possibility of a program attempting to access a non-existent object (through a void reference) at run time exists, and cannot be detected statically in the general case.
Look for example at code of the following form (it is written in Eiffel, but it could just as well be Pascal or Ada):

    m, n: INTEGER;
    x: SOME_REFERENCE_TYPE;
    ...
    m := prime (8);
        -- Function prime (i), part of the same class, returns
        -- the i-th prime, with 1 being the first prime. Here
        -- the result will be the eighth prime number, 17
    n := prime (9);
        -- Result will be 19
    if n - m = 2 then
        x.Create
            -- Here x will be non-void. If the test
            -- yielded false x would be void.
    end;
    x.some_operation_of_class_SOME_REFERENCE_TYPE

There is simply no reasonable way (now or in the foreseeable future) to determine statically, as part of a compiling process, that x will indeed be non-void at the time of the call. This would require that the compiler be able to analyze the text of function `prime' (which is assumed to be a programmer-supplied function, written as part of the same class) and to prove that the difference between the values computed by that function for arguments 8 and 9, that is to say n - m, is indeed 2.

One could argue for restrictions on the use of references which would disallow code such as the above. But it is not clear what kinds of restriction are acceptable (i.e. would make static checking possible without prohibiting useful cases). At best, we can hope to replace full-fledged program proving by data flow analysis, which is easier but not much more realistic.

The approach taken by the designers of all languages supporting references or pointers is that the vacuity question (whether or not an entity will be void before application of an operation) has to be resolved at run time. As a consequence vacuity is NOT a typing property in Pascal, Ada etc. In Eiffel it was felt intellectually more satisfying to cast this property in typing terms, but the end result is the same. In fact the Eiffel approach may be characterized as follows: all type checking may be done statically with ONE exception - the vacuity test.
This explains the rationale for the Reverse Assignment Attempt as described above: in some cases, such as the retrieval of persistently stored data, a type check must perforce be done dynamically, since we are accessing data over which we have no control, and need to ``clear'' it, in the sense of security clearance, before we start acting on it as bona fide object-oriented data. We do not, however, want these (important but infrequent) cases to make static type checking impossible in all other ``normal'' cases. The solution provided by Reverse Assignment Attempt is therefore to replace a type check (which should always be static) by the only dynamically checked property: vacuity.
--
Bertrand Meyer
bertrand@eiffel.com
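Readers coming from C++ may find it helpful to see the Reverse Assignment Attempt as a checked downcast; in later C++ the analogous operation is dynamic_cast, which yields a null pointer (Eiffel: makes the target void) when the runtime type does not conform. This sketch is mine, and the class names are invented, not from Meyer's example.

```cpp
#include <cassert>

struct any_t { virtual ~any_t() {} };            // plays the role of ANY
struct assumed_type : any_t { int op1() { return 1; } };
struct other_type   : any_t {};

// The Eiffel scheme "my_value ?= value; if my_value.Void then ..."
// rendered with a checked downcast.
int clear_and_use(any_t* value) {
    assumed_type* my_value = dynamic_cast<assumed_type*>(value);
    if (my_value == nullptr)
        return -1;            // "My assumption was wrong!"
    return my_value->op1();   // proceed under the proper type
}
```

As in Eiffel, the static type rules stay fully in force everywhere else; only this one operation defers a conformance check to run time.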
craig@Neon.Stanford.EDU (Craig D. Chambers) (11/09/90)
In article <445@eiffel.UUCP> bertrand@eiffel.UUCP (Bertrand Meyer) writes:

>From <1990Nov7.220902.13393@Neon.Stanford.EDU> by
>craig@Neon.Stanford.EDU (Craig D. Chambers):
>>In article <443@eiffel.UUCP> bertrand@eiffel.UUCP (Bertrand Meyer) writes:
>>> This is of course a natural consequence of the view that a class
>>>is two things: modular facility and type. This view is applied
>>>to its full extent in Eiffel, where there is no other modular facility
>>>than the class, and every possible type is based on a class. The
>>>rules applying to inheritance (e.g. information hiding rules, the
>>>covariant conventions for typing etc.) follow directly.
>>
>> I disagree that Eiffel's typing rules "follow directly" from treating
>> a class as both a module (a collection of method implementations) and
>> a type (a set of procedure/function interfaces).
>>
>> [Various incorrect statements.]
>
> I don't really see the benefit of attributing to someone
>(me in this case) comments which are the exact reverse of what
>that person actually wrote. This defeats the whole purpose of any
>technical discussion - all the more regrettably as other aspects of
>Mr. Chambers's posting are worth discussing.

What are you saying? That you didn't make the comments that I attributed to you? Of course I included your comments verbatim, without editing. I have your original posting (a reply to Ralph Johnson's posting) and my response on line if you'd like to check.

Or are you saying that my analysis of Eiffel's "current" (i.e. as described in OOSC) typing rules is the opposite of what is really the case? Surely you cannot be saying that Eiffel (OOSC) doesn't have holes in its static checking (even excluding void references). In your posting on comp.lang.eiffel you admit that there is the possibility of getting a run-time error because of Eiffel's (as in OOSC) covariant type rules and its visibility rules; I'd consider this a hole in a static type-checking system.
Or are you saying that my interpretation of your recent posting on comp.lang.eiffel, for extending the type checker to plug these holes, is wrong? Admittedly, I didn't use your words for describing the fix, but (as occurs often in technical discussions) attempted to describe the effect of the change in other terms that reveal it as really something else. I still believe that your proposal is really the same as treating a subclass as a subtype of one of its superclasses iff the arguments of the subclass' methods are the same type as the corresponding argument types of the superclass, and if the visibility declarations are compatible. This gives up any covariant subtyping rules, replacing them with (a restricted form of) contravariant subtyping rules. I made this same observation in an earlier posting on comp.object and comp.lang.eiffel, but didn't get any response to the contrary. If you disagree with this view, then please post a reply with an example illustrating how I'm wrong. Please let me know which statements of mine you consider incorrect, at least by private mail. I can't find the mistakes to which you allude.

>> The OrderedCollection problem could be avoided if
>> the language includes a default instance of each class (such as a null
>> pointer), but null pointers (void references in Eiffel) just introduce
>> a whole new class of run-time errors that cannot be detected
>> statically (e.g. sending messages to null pointers).
>
>This is correct, but unfortunately there is no way of dealing with
>these errors statically in a typed language that supports references.
>(At least there is no practical way unless one is prepared to include
>a general-purpose resolution theorem prover in the compiler.)
>
>Giving up references means giving up linked structures, cyclic ones
>etc. This does not seem realistic. If we do have references, then we
>need a void reference.
>Then the possibility of a program attempting to
>access a non-existent object (through a void reference) at run-time
>exists, and cannot be detected statically in the general case.

I strongly disagree that a language with references needs void references. I've always considered them a language design flaw, at least in higher-level languages (e.g. higher than C). See below for examples of languages that do not include void references.

>The approach taken by the designers of all languages
>supporting references or pointers is that the vacuity question
>(whether or not an entity will be void before application of an operation)
>has to be resolved at run time. As a consequence vacuity is NOT a
>typing property in Pascal, Ada etc. In Eiffel it was felt intellectually
>more satisfying to cast this property in typing terms, but the end result
>is the same.

This is not true. Many dynamically-typed languages exclude void references (e.g. Lisp, Smalltalk). These languages tend to use a nil object, which is tantamount to a void reference, and so are not good counter-examples. In Self we are rewriting our code to avoid using a generic nil object, always initializing variables to "real" objects. In some cases we have type-specific null objects that implement all the messages that should be understood by objects of the type, typically by doing nothing and terminating a recursion. The list-terminating object is a canonical example of a type-specific null object. A statically-typed version of Self could use the same technique to eliminate the need for void references. Some existing statically-typed languages, such as CLU and its successor Trellis/Owl, also exclude void references. In these languages, all variables must be initialized before use. None of these languages requires a run-time vacuity test.

>In fact the Eiffel approach may be characterized as follows: all type
>checking may be done statically with ONE exception - the vacuity
>test.
>This explains the rationale for the Reverse Assignment Attempt
>as described above: in some cases, such as the retrieval of persistently
>stored data, a type check must perforce be done dynamically since we
>are accessing data over which we have no control, and need to
>``clear'' it, in the sense of security clearance, before we start
>acting on it as bona fide object-oriented data. We do not, however,
>want these (important but infrequent) cases to make static type
>checking impossible in all other ``normal'' cases. Then the solution
>provided by Reverse Assignment Attempt is to replace a type
>check (which should always be static) by the only dynamically checked
>property: vacuity.

Even assuming that you resolve the holes in the type system by giving up on real covariant subtyping, all you've done here is to replace some static type errors with a single kind of run-time error. This doesn't make those classes of errors go away; it just renames them all as vacuity errors. There are still run-time type checks in the system (e.g. for the reverse assignment operator) and potential subsequent run-time errors (e.g. void reference use) which would not be present in a language with complete static type checking (e.g. Trellis/Owl). You may argue that such languages make writing programs too hard, but the existing body of Trellis/Owl software might be a convincing counter-argument.

-- Craig Chambers
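The type-specific null objects Chambers describes (what is now usually called the null-object pattern) can be sketched in C++; names are mine. The list-terminating object answers the full protocol of a real node, so no void reference, and hence no run-time vacuity test, is ever needed.

```cpp
#include <cassert>

// Protocol understood by every list object, terminator included.
struct list_node {
    virtual ~list_node() {}
    virtual int length() const = 0;
};

// Type-specific null object: does nothing and terminates the recursion.
struct nil_node : list_node {
    int length() const { return 0; }
};

// Ordinary node; `rest' always points at a real object, never at null.
struct cons_node : list_node {
    int value;
    const list_node* rest;
    cons_node(int v, const list_node* r) : value(v), rest(r) {}
    int length() const { return 1 + rest->length(); }
};
```

Because every reference is initialized to a real object, `rest->length()` can never fail the way a message to nil (or a call through a void reference) can.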
pcg@cs.aber.ac.uk (Piercarlo Grandi) (11/11/90)
bertrand@eiffel.UUCP:

bertrand> I do not understand Professor Johnson's last comment about
bertrand> creating abstract classes ``only to get around problems caused
bertrand> by the type checker''. This does not relate to anything in my
bertrand> experience.

On 8 Nov 90 17:17:00 GMT, johnson@m.cs.uiuc.edu said:

johnson> I was not talking about programs that were impossible to type-check.
johnson> I was just complaining about programs that created abstract classes
johnson> with no implementation just so classes could multiply inherit from
johnson> them and so have several interfaces. I consider this a bit ugly,
johnson> but that is not the main problem.

Arghhh. It is simply disgusting, IMNHO.

johnson> The main problem is that I keep running into people who think
johnson> that this is the main purpose of abstract classes, whereas the
johnson> REAL purpose of abstract classes is to reuse design, including
johnson> abstract algorithms.

Pah. Pah. Pah. The REAL REAL :-) :-) purpose of abstract classes is to utterly confuse and complicate the life of the programmer, just like inheritance and the other ill consequences of adopting the Simula 67 OO model lock, stock and barrel even when there is no need to be backwards compatible with Algol 60.

johnson> Thus, abstract classes in Smalltalk almost always have useful
johnson> code that is defined in terms of the undefined operations.
johnson> I've seen quite a few abstract classes in C++ that don't.

Why not do it properly and have *different* mechanisms for abstraction over interfaces, over implementations and over specifications? Why hack to death inheritance, which is a profoundly unorthogonal concept? Overloading classes with inheritance is not just ugly; it generates all sorts of misconceptions, not just those that you describe. Consider the start of this discussion, OO in Ada. In Ada one uses generics, not classes, to abstract implementations over types and procedures, and this is *good*.
Indeed I am starting to reckon that classes are a bad way to attack the reuse/abstraction problem, and that other OO technologies, such as those in CLOS (or ACTOR), seem more promising.
--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk