[comp.sw.components] Inheritance vs. component efficiency

wtwolfe@hubcap.clemson.edu (Bill Wolfe) (06/04/89)

   Inheritance has been touted by some as something which "facilitates
   software development ... a mechanism that allows the factorization
   of specifications shared by multiple pieces of software ... in order
   to define a software component that shares the whole specification 
   of another software component, ... indicate that the former inherits
   from the latter, and then specify the differential features of the
   component which is being defined."  But then a serious problem arises:
   inheritance serves not only to suck in the desired _specifications_,
   but the associated _implementations_ as well.  To overcome this,
   "inherited operations can be redefined with new implementations...
   new data components can be added to the internal structure."; but
   this leads to very serious problems.

   The first problem is that although certain operations may have been
   overridden (redefined), the implementations of the other operations
   have not been modified to account for this fact.  Thus, a
   large percentage of the effort expended by certain inherited operations
   may well be devoted to the maintenance of aspects of the state of the
   base component which are crucial to the correct functioning of the
   operations which have now been overridden.  This introduces major
   inefficiency into each invocation of the inherited operations, and
   constitutes a severe performance penalty.  A cost which could have 
   been paid once and for all at development time (by designing the 
   component efficiently) is now being paid forevermore, at run time.

   The second problem is that the above-mentioned useless state 
   information also consumes space, penalizing us in both dimensions.

   The third problem is that the inherited operations were implemented
   with regard only to the operations provided by the base component.
   However, it is frequently the case that the addition of even a single
   operation has a dramatic impact on the nature of the best solution to
   the implementation problem; since the implementation of the inherited
   operations and the implementation of the non-inherited operations are
   carried out independently of one another, the efficiencies which would
   have been realized
   had the designer implemented all the operations together are sacrificed,
   resulting again in time and space penalties.
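
   The first two problems can be sketched concretely.  In the following
   hypothetical Ada fragment (all names are invented, and the
   representation is left visible for brevity), Insert maintains a
   Max_Seen field solely so that the base Max can answer in constant
   time.  The derived type redefines Max, yet the inherited Insert
   still updates Max_Seen on every call, and every object still
   carries the field:

```ada
--  Hypothetical base component: Insert does bookkeeping for Max.
package Int_Sets is
   type Int_Array is array (1 .. 100) of Integer;
   type Int_Set is record
      Count    : Natural := 0;
      Max_Seen : Integer := Integer'First;  -- kept only for Max's sake
      Items    : Int_Array;
   end record;
   procedure Insert (S : in out Int_Set; X : in Integer);
   function  Max    (S : in Int_Set) return Integer;
end Int_Sets;

package body Int_Sets is
   procedure Insert (S : in out Int_Set; X : in Integer) is
   begin
      S.Count := S.Count + 1;
      S.Items (S.Count) := X;
      if X > S.Max_Seen then       -- bookkeeping paid on EVERY insert
         S.Max_Seen := X;
      end if;
   end Insert;

   function Max (S : in Int_Set) return Integer is
   begin
      return S.Max_Seen;           -- constant time, thanks to Insert
   end Max;
end Int_Sets;

with Int_Sets;
package Abs_Sets is
   --  Differential feature: Max now reports the largest magnitude.
   --  Insert is inherited unchanged, so each insertion still pays to
   --  update Max_Seen, and each object still stores it, although the
   --  redefined Max never consults it.
   type Abs_Set is new Int_Sets.Int_Set;
   function Max (S : in Abs_Set) return Integer;
end Abs_Sets;

package body Abs_Sets is
   function Max (S : in Abs_Set) return Integer is
      Result : Integer := 0;
   begin
      for I in 1 .. S.Count loop
         if abs S.Items (I) > Result then
            Result := abs S.Items (I);
         end if;
      end loop;
      return Result;
   end Max;
end Abs_Sets;
```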

   Since the basic rationale behind software components is the exploitation
   of economies of scale, it makes economic sense to seek an extremely good
   (or perhaps optimal) implementation with little regard to development 
   costs; these costs are spread over thousands or millions of applications 
   and are generally trivially recoverable.  Inheritance is a mechanism which
   seeks to minimize development costs at the probable expense of utilization
   costs, and is therefore something which is of little value to the developer
   of software components, who seeks to sell his product into a market whose
   economic characteristics are essentially those of a commodity market. 

   There may be niche sectors in which utilization costs are relatively
   unimportant and development costs relatively important, such as research
   organizations that are using unique software components as an experimental
   tool for the testing of certain ideas; for such isolated markets, the
   use of inheritance may be appropriate.  There may be situations in which
   the demand pull is so great that inheritance may be useful as a way to
   get an interim solution to market as quickly as possible.  But for the
   majority of the software components industry, an industry built around 
   the supply of highly optimized, widely used, standardized products into 
   a market which is largely commoditized, inheritance is an idea whose time 
   will never come.  


   Bill Wolfe, wtwolfe@hubcap.clemson.edu

eachus@mbunix.mitre.org (Robert Eachus) (06/20/89)

In article <5021@wiley.UUCP> simpson@poseidon.UUCP (Scott Simpson) writes:

->package Demo is
->    type A is limited private;
->    type B is limited private;

->    procedure X(P1 : A; P2 : B);
->end Demo;

->with Demo; use Demo;
->procedure Illustrate is
->    type S is new A;  -- You get "procedure X(P1 : S; P2 : B)".
->    type T is new B;  -- You get "procedure X(P1 : A; P2 : T)".
->end Illustrate;

->How do you get "procedure X(P1 : S; P2 : T)"?  You can't.

You can! The easiest way is:

     with Demo;
     procedure Illustrate is
       type S is new A;  -- You get "procedure X(P1 : S; P2 : B)".
       type T is new B;  -- You get "procedure X(P1 : A; P2 : T)".
       procedure X(P1: S; P2: T) is
       begin
         X(P1, B(P2));
       end X;
     begin ...
     end Illustrate;

Of course the natural Ada approach to this (assuming that the X's with
mixed parameters are unwanted) is to write:

     generic
       type A is limited private;
       type B is limited private;
        with function ... -- needed operations for A and B,
                          -- usually with defaults.
     package Demo is
       procedure X(P1 : A; P2 : B);
     end Demo;

     with Demo;
     procedure Illustrate is
       type S is ...
       type T is ...
       package Demo is new Standard.Demo(S,T);
     begin ...
     end Illustrate;

     This inversion, defining library packages in terms of truly
abstract types and allowing the user to define the instance types, is
common in Ada.  The easiest way to think of it is that Ada classes are
built "bottom up" instead of top down.  An Ada class could be defined
as the set of types for which a particular generic can be sensibly
instantiated.  ("Sensibly" because certain instantiations are permitted
at compile time, but must raise an exception during elaboration.)
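
That idea can be illustrated with a toy generic (names invented for
illustration): the generic formal part states exactly the operations
the class requires, and any type able to supply them is a member of
the class.

```ada
--  The "class" here is: all types with assignment, equality, and "<".
generic
   type Item is private;
   with function "<" (Left, Right : Item) return Boolean is <>;
package Ordering is
   function Larger (A, B : Item) return Item;
end Ordering;

package body Ordering is
   function Larger (A, B : Item) return Item is
   begin
      if A < B then
         return B;
      else
         return A;
      end if;
   end Larger;
end Ordering;

with Ordering;
procedure Try_Ordering is
   type Celsius is new Float;
   --  Celsius joins the class by supplying "<" (here via the default).
   package Celsius_Ordering is new Ordering (Item => Celsius);
   Warmer : constant Celsius := Celsius_Ordering.Larger (3.5, 20.0);
begin
   null;
end Try_Ordering;
```

The "is <>" default lets the instantiation pick up the user type's own
"<" without naming it, which is what makes such bottom-up classes cheap
to join.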

					Robert I. Eachus

with STANDARD_DISCLAIMER;
use  STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (08/13/89)

From article <130200005@p.cs.uiuc.edu>, by johnson@p.cs.uiuc.edu:
> [repost of an old article]

   I think we've pretty well beaten this topic to death in this
   newsgroup, and I see no point in reposting OLD articles to which
   I have already responded.  There was a followup to my reply which
   I should have responded to, but did not, so I'll go ahead and cover
   it now.  I mentioned that in the situation Ralph cited as requiring
   run-time binding, tasking would be sufficient, and Ralph said that
   this was overkill.  But Ralph assumes a lot about how the multitasking
   will be implemented, and those assumptions are unwarranted.  Hilfinger
   and others have done
   work on compiler technology which automatically converts certain
   Ada multitasking situations into systems which perform equivalently
   with fewer threads of control.  Ralph also assumes that task creation
   will be a high-overhead situation; however, this is a characteristic
   of certain *operating systems*, not a characteristic of Ada.  Work is
   also being done on "Ada engines", systems which are specifically
   designed to, among other things, provide the support for lightweight 
   processes that is appropriate for an Ada environment.  
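
   To make the tasking alternative concrete, here is a minimal sketch
   (names invented): each Worker task carries its own state and
   behavior, so the binding of a request to its handler is made by the
   accepting task rather than by run-time dispatch.

```ada
--  Hypothetical sketch: a pool of lightweight worker tasks.
procedure Demo_Tasks is
   task type Worker is
      entry Process (X : in Integer);
   end Worker;

   task body Worker is
   begin
      loop
         select
            accept Process (X : in Integer) do
               null;  -- the work for X would go here
            end Process;
         or
            terminate;  -- go away quietly when the master finishes
         end select;
      end loop;
   end Worker;

   Pool : array (1 .. 4) of Worker;  -- four lightweight threads
begin
   Pool (1).Process (42);  -- rendezvous with one worker
end Demo_Tasks;
```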

   I will have more to contribute on this topic when I return from the 
   Tri-Ada '89 conference, at which reuse is going to be a major topic.


   Bill Wolfe, wtwolfe@hubcap.clemson.edu

jesup@cbmvax.UUCP (Randell Jesup) (08/27/89)

In article <6254@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:
>    Ralph also assumes that task creation
>   will be a high-overhead situation; however, this is a characteristic
>   of certain *operating systems*, not a characteristic of Ada.  Work is
>   also being done on "Ada engines", systems which are specifically
>   designed to, among other things, provide the support for lightweight 
>   processes that is appropriate for an Ada environment.  

	Such systems already exist, though they don't all have Ada compilers
for them (yet).  For example, the Amiga OS is based on lightweight tasks:
a task switch on a 7.16 MHz 68000 machine costs only 400 microseconds.
Creating a task takes only a few milliseconds, most of that being
stack-allocation time (a rough estimate off the top of my head).  Of
course, these things go MUCH faster on 14 MHz '020s or 25 MHz '030s.

	MACH also has lightweight thread support (though MACH threads are
'heavier' than Amiga tasks, in general).

-- 
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, jesup@cbmvax.cbm.commodore.com  BIX: rjesup  
Common phrase heard at Amiga Devcon '89: "It's in there!"