johnson@p.cs.uiuc.edu (06/07/89)
I disagree with just about everything in Bill Wolfe's article on inheritance. I have done a lot of object-oriented programming, including an optimizing compiler for Smalltalk in Smalltalk and a framework for operating systems in C++.

First, inheritance is NOT primarily a way to reuse specifications. It is primarily a way to reuse code. The real question is, "what is the kind of code that is being reused?" It turns out that it can not only be code that describes an implementation, but a textual description (i.e., a code) that describes a design.

There are at least three reasonable ways of using inheritance. One is inheritance to specialize a component. This is the kind that is talked about the most. The second is similar to taking an existing program and editing it into shape. In other words, you subclass the original class and redefine methods until it does what you want. This style of programming results in ugly class hierarchies and can lead to the inefficiencies that Bill Wolfe complained about. However, it is very valuable during rapid-prototyping and helps lead to families of interchangeable components. The third way of using inheritance is to inherit from an abstract class, also called a deferred class. This is the most elegant use of inheritance, and is essentially using the superclass as a template to generate new classes.

> The first problem is that regardless of the fact that certain
> operations may have been overridden (redefined), the implementation of
> the other operations has not been modified to account for this fact.
> Thus, a large percentage of the effort expended by certain inherited
> operations may well be devoted to the maintenance of aspects of the
> state of the base component which are crucial to the correct
> functioning of the operations which have now been overridden. This
> introduces major inefficiency into each invocation of the inherited
> operations, and constitutes a severe performance penalty. A cost which
> could have been paid once and for all at development time (by designing
> the component efficiently) is now being paid forevermore, at run time.
> The second problem is that the above-mentioned useless state
> information also consumes space, penalizing us in both dimensions.

Abstract classes leave important parts of the implementation undefined, so subclasses rarely inherit something that they don't need. In particular, abstract classes usually don't define much state information, so there is almost NEVER a space penalty.

> The third problem is that the inherited operations were implemented
> with regard only to the operations provided by the base component.
> However, it is frequently the case that the addition of even a single
> operation has a dramatic impact on the nature of the best solution to
> the implementation problem; since the implementation of the inherited
> operations and the implementation of the non-inherited operations are
> mutually independent, the efficiencies which would have been realized
> had the designer implemented all the operations together are
> sacrificed, resulting again in time and space penalties.

All I can say is that this doesn't happen much in practice. This can certainly happen during rapid-prototyping, but it is easy to fix and is unlikely to happen in a mature class library with a lot of abstract classes.

> Since the basic rationale behind software components is the
> exploitation of economies of scale, it makes economic sense to seek an
> extremely good (or perhaps optimal) implementation with little regard
> to development costs; these costs are spread over thousands or millions
> of applications and are generally trivially recoverable. Inheritance is
> a mechanism which seeks to minimize development costs at the probable
> expense of utilization costs, and is therefore something which is of
> little value to the developer of software components, who seeks to sell
> his product into a market whose economic characteristics are
> essentially those of a commodity market.

This statement completely flies in the face of reality. The problem with software today is not that it is too slow, but that it is too expensive to construct. 95% of the time, the people who are worrying about efficiency are worrying about the wrong thing. We don't need to make our software faster, we need to make it more reliable, easier to use, easier to change. In fact, most software that is written is run on only a few machines. Few programmers are writing software that will run on thousands of machines.

Object-oriented programming in general, and inheritance in particular, helps make software more reliable, easier to use, and easier to change. Moreover, it does not have to make programs any slower. In fact, object-oriented programs can be faster than conventional programs even though they are built out of components that you might criticize as being too general-purpose.

The first example is the Choices operating system framework, which is written in C++. C++ implementations are quite efficient. We are just now getting to the point where we can build a Unix-compatible version, so we have not done complete system performance comparisons. However, we have been able to benchmark various pieces, such as the kernel call mechanism or the file system. These pieces are all faster than Unix running on the same hardware. Further, we expect to use this set of components to build real-time operating systems as well, with performance equal to that of other real-time operating systems. Programmers who do not want to pay for a feature like virtual memory can easily replace the memory-management module with one that has much less overhead. On the other hand, we have been inventing all sorts of exotic virtual memory features, some of which are fairly expensive. Only those people who want the benefit have to pay the cost. Programmers can build customized versions of components by inheriting from an abstract class, which provides a template for their customized class and which ensures that the new class works with the existing classes.

The second example comes from my optimizing compiler, which is written in Smalltalk. Smalltalk is quite a bit slower than C, which is why we are building an optimizing compiler for it. However, some parts of the compiler are faster than equivalent programs written in C. In particular, our table-driven code generator converts our intermediate representation to machine language using hash tables, which are built into Smalltalk. The original C program used YACC to compare a statement in the intermediate language with the machine description. In both cases we were just reusing what was available, but the Smalltalk program is a lot faster. This is due mostly to the high quality of the Smalltalk class library. Another advantage of Smalltalk is the quality of the programming environment, which makes it easy to find the performance bottlenecks in the system and fix them.

Getting back to the issue of overemphasizing performance, we all know that most programs spend 80% of their time in 20% of the program. However, it takes just as long to write the 80% as the 20%. Thus, it makes sense to use techniques that produce inefficient code on the 80% and use the time you save figuring out how to make the 20% go faster.

There is a lot I could say about object-oriented design and the importance of inheritance, but I don't have time. However, I will refer you to a paper I wrote with Brian Foote in V1 N2 of the Journal of Object-Oriented Programming called "Designing Reusable Classes". To summarize, inheritance is very important, and should not be casually discarded.
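The abstract-class style of customization mentioned above can be sketched in present-day C++ notation. This is a hypothetical illustration, not code from Choices or the Smalltalk compiler; the class names and operations are invented. The superclass defers two primitive operations, carries no state of its own, and supplies a default `contains` written only in terms of the deferred operations:

```cpp
#include <cstddef>

// Hypothetical sketch: an abstract ("deferred") class as a template
// for subclasses. It defines no state, so nothing useless is inherited.
class Collection {
public:
    virtual ~Collection() {}
    virtual int item(std::size_t i) const = 0;   // deferred: subclass defines
    virtual std::size_t size() const = 0;        // deferred: subclass defines

    // Default implementation, expressed only through the deferred
    // operations; a subclass may override it with something faster.
    virtual bool contains(int x) const {
        for (std::size_t i = 0; i < size(); ++i)
            if (item(i) == x) return true;
        return false;
    }
};

// A concrete subclass supplies the deferred operations and inherits
// contains() unchanged.
class ArrayCollection : public Collection {
    int data_[4];
public:
    ArrayCollection() { data_[0] = 3; data_[1] = 1; data_[2] = 4; data_[3] = 1; }
    int item(std::size_t i) const { return data_[i]; }
    std::size_t size() const { return 4; }
};
```

A subclass that knows a better search strategy (say, a sorted representation) can override `contains` without disturbing any client code written against `Collection`.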
Ralph Johnson - University of Illinois at Urbana-Champaign
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/10/89)
From article <130200002@p.cs.uiuc.edu>, by johnson@p.cs.uiuc.edu:

> First, inheritance is NOT primarily a way to reuse specifications.
> It is primarily a way to reuse code. The real question is,
> "what is the kind of code that is being reused?" It turns out
> that it can not only be code that describes an implementation,
> but a textual description (i.e., a code) that describes a design.

Be specific. What precisely are you referring to? (BTW, I didn't assert that inheritance was primarily a way to reuse specifications; in fact, my comments centered around the inheritance of the associated implementations...)

> There are at least three reasonable ways of using inheritance.
> One is inheritance to specialize a component. This is the kind
> that is talked about the most.

I presume that by this you refer to the practice of defining a subtype, which has all the functionality of the original type but is restricted in some way, such as having a restricted carrier space (range of possible values). If not, be precise. This practice does not lead to unacceptable inefficiencies.

> The second is similar to taking an existing program and editing it
> into shape. In other words, you subclass the original class and
> redefine methods until it does what you want.

(Note for readers unfamiliar with the terminology: "class" is New-Speak for "type", and "method" is New-Speak for "procedure or function"...)

> This style of programming results in ugly class hierarchies and
> can lead to the inefficiencies that Bill Wolfe complained about.
> However, it is very valuable during rapid-prototyping and helps
> lead to families of interchangeable components.

My comments specifically indicated that the tactic may be useful in isolated areas in which development costs were important and utilization costs were not; however, this does not characterize the software components marketplace.
Families of interchangeable components can be achieved simply by defining inter-type compatibilities; this does not require inheritance.

> The third way of using inheritance is to inherit from an abstract
> class, also called a deferred class. This is the most elegant use
> of inheritance, and is essentially using the superclass as a template
> to generate new classes.

By this I presume you mean the inheritance of specifications; if not, be precise.

> [...] it is frequently the case that the addition of even a single
> operation has a dramatic impact on the nature of the best solution to
> the implementation problem; since the implementation of the inherited
> operations and the implementation of the non-inherited operations are
> mutually independent, the efficiencies which would have been realized
> had the designer implemented all the operations together are sacrificed,
> resulting again in time and space penalties.
>
> All I can say is that this doesn't happen much in practice. This can
> certainly happen during rapid-prototyping, but it is easy to fix and
> is unlikely to happen in a mature class library with a lot of abstract
> classes.

Assuming that an "abstract class" consists of a specification and not an implementation, this is blatantly obvious. Since my comments focused upon the inheritance of implementations, we must consider the impact of adding operations upon the efficiency of the implementation; generally, the impact is very dramatic. The representation that served us well when we were supporting a standard queue abstraction becomes disastrous when our standard queue becomes a priority queue. It is this immediate and total obsolescence of implementation which leads to problems when one attempts to inherit implementations.
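The queue example can be made concrete with a hypothetical sketch in present-day C++ notation (names invented; not taken from any actual component library). A FIFO queue reasonably stores its items in arrival order; a priority queue that inherits this representation must keep the sequence sorted by brute force, paying O(n) per insertion where a heap designed for the priority-queue abstraction would pay O(log n):

```cpp
#include <vector>
#include <algorithm>
#include <functional>

// Hypothetical FIFO queue: a contiguous sequence in arrival order is a
// perfectly good representation for this abstraction.
class Queue {
protected:
    std::vector<int> items_;   // front of the queue is items_.front()
public:
    virtual ~Queue() {}
    virtual void enqueue(int x) { items_.push_back(x); }  // O(1)
    int dequeue() {
        int x = items_.front();
        items_.erase(items_.begin());
        return x;
    }
    bool empty() const { return items_.empty(); }
};

// Inheriting the implementation: to make dequeue() yield the maximum,
// the inherited vector must be kept sorted (descending), costing O(n)
// per insertion. A priority queue designed from scratch would use a
// heap instead; the inherited representation is already obsolete.
class PriorityQueue : public Queue {
public:
    void enqueue(int x) {
        items_.insert(std::lower_bound(items_.begin(), items_.end(), x,
                                       std::greater<int>()),
                      x);
    }
};
```

The subclass works, but the inherited representation dictates the cost: exactly the obsolescence-of-implementation problem described above.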
> Since the basic rationale behind software components is the exploitation
> of economies of scale, it makes economic sense to seek an extremely good
> (or perhaps optimal) implementation with little regard to development
> costs; these costs are spread over thousands or millions of applications
> and are generally trivially recoverable. Inheritance is a mechanism which
> seeks to minimize development costs at the probable expense of utilization
> costs, and is therefore something which is of little value to the developer
> of software components, who seeks to sell his product into a market whose
> economic characteristics are essentially those of a commodity market.
>
> This statement completely flies in the face of reality. The problem with
> software today is not that it is too slow, but that it is too expensive
> to construct.

This is the problem with *application* software. One of the best ways to speed up the development of application software is to make extensive use of highly efficient standardized software components. When one is building a software component which is to be reused millions of times, the cost of construction will be amortized over millions of applications. The economics of the situation demand that the efficiency of the component be maximized instead.

> 95% of the time, the people who are worrying about efficiency are
> worrying about the wrong thing. We don't need to make our software
> faster, we need to make it more reliable,

And the best way to do this is to rely upon proven software components...

> easier to use,

By incorporating human factors specialists into the design team...

> easier to change.

By constructing software in a modular manner.

> In fact, most software that is written is run on only a few machines.
> Few programmers are writing software that will run on thousands of machines.

Oh, but I beg to differ.
A large and rapidly growing number of programmers are writing software in Ada, a language which is rigorously standardized, allowing one to write software which will run on any machine having a validated Ada compiler. This amounts to millions of machines. In fact, without this level of standardization, it is difficult to imagine a viable software components industry.

> [...] inheritance is very important, and should not be casually discarded.

Perhaps in the rare situations in which component development costs are more important than utilization costs, such as research situations; however, for the commoditized software components industry, in which standardized software components are reused thousands or millions of times in many different applications, inheritance (as currently defined) is simply an inappropriate development mechanism.

Bill Wolfe, wtwolfe@hubcap.clemson.edu
ram@wb1.cs.cmu.edu (Rob MacLachlan) (06/11/89)
Far be it from me, a university research programmer, to tell a real software engineer how to develop reusable, maintainable software. But... It seems to me that anyone trying to develop highly reusable software components has a thing or two to learn from object-oriented development systems.

As I see it, the key barrier to reusing software is that systems don't naturally break down into atomic reusable components. Suppose I had this whizzy indexed library of frobs, and I wanted a green frobboz. When I look, I probably won't find one. I could maybe make do with the yellow-green one, but I would have to work around it or modify it. The advantage of an o-o inheritance system is that it makes it very easy to reuse a component *with modifications*. If you have to modify a component without understanding how it works, then you are much better off with the structured modification techniques of inheritance and method combination than if you attempted arbitrary modification of the code.

I personally think you are wacko if you think that there are going to be very many components reusable millions of times without modification, at least if defined in Ada. I can conceive of there being a "programming environment" someday that could substantially automate the process of customizing components, but it's going to have to pay a lot more attention to semantics than Ada does.

  Rob
--
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/12/89)
From article <5186@pt.cs.cmu.edu>, by ram@wb1.cs.cmu.edu (Rob MacLachlan):

> As I see it, the key barrier to reusing software is that systems don't
> naturally break down into atomic reusable components.

While out in industry, I found the reverse to be true; as long as one actively seeks to identify reusability opportunities, there will be little difficulty finding them. Just about anything you do more than once is a good candidate for reuse, after being structured into an appropriate abstraction. The problem is not that "systems don't naturally break down into atomic reusable components"; in fact, they do so quite readily when considered in relation to similar systems. The problem is that many programmers (and project managers) do not view things with an eye toward reusability: programmers, either because their language doesn't support reusability or because the concepts are slow to propagate into industry; and managers, because they aren't being measured by anything other than the time and money required for the project itself, regardless of the potential value of spinoffs.

> Supposing I had this whizzy indexed library of frobs, and I wanted
> a green frobboz. But when I look, I probably won't find one. I could
> maybe make do with the yellow-green one, but I would have to work around
> it or modify it. The advantage of an o-o inheritance system is that it
> makes it very easy to reuse a component *with modifications*. If you have
> to modify a component without understanding how it works, then you are much
> better off with the structured modification techniques of inheritance and
> method combination than if you attempted arbitrary modification of the code.

The same thing is possible through the use of Ada packages; a new package serves as an interface which hides both the calls to the old package and the new code which was added on top of it, thus preserving the integrity of the original abstraction and creating a new one.
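The package-layering tactic just described can be transliterated into C++ notation for comparison (a hypothetical sketch; all names invented). The new abstraction *contains* the old one and forwards calls to it, so clients of the new interface never see the old component underneath:

```cpp
#include <vector>

// The "old package": an existing, trusted component.
class Stack {
    std::vector<int> items_;
public:
    void push(int x) { items_.push_back(x); }
    int pop() { int x = items_.back(); items_.pop_back(); return x; }
    bool empty() const { return items_.empty(); }
};

// The "new package": a layered abstraction that hides both the calls
// to the old component and the new code added on top of it. No
// inheritance is involved; the old abstraction stays intact.
class CountingStack {
    Stack inner_;      // calls to the old component are hidden here
    long pushes_;      // the new code layered on top
public:
    CountingStack() : pushes_(0) {}
    void push(int x) { ++pushes_; inner_.push(x); }
    int pop() { return inner_.pop(); }
    bool empty() const { return inner_.empty(); }
    long total_pushes() const { return pushes_; }
};
```

If the layered implementation later proves too slow, `CountingStack` can be reimplemented from scratch behind the same interface without touching its clients.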
Do you seriously believe that any software engineer worth his/her pay is going to overlook the above strategy? Quite simply, "Abstraction is the fundamental means by which the computer scientist can combat complexity"; to assume arbitrary modification of code is to assume total disregard for this fundamental principle of computer science.

Now if a single inheritance clause is used to obtain both specification and implementation, it would seem that any modification of a component's implementation would trigger the recompilation of everything depending on it. In contrast, the new Ada package can be added to the local library of software components immediately if it seems appropriate; when it becomes obvious that the component is going to have to be implemented more efficiently, Ada provides the abstract separation which allows us to replace the implementation in a modular fashion without recompiling any of the code depending upon the abstraction. The Ada package provides excellent support for abstraction, and it has yet to be demonstrated that inheritance is even necessary.

> I personally think you are wacko if you think that there are going to be
> very many components reusable millions of times without modification, at
> least if defined in Ada.

Why? Ada provides a very high level of support for abstract data types, so no problems there. There is a large and growing set of Ada packages which support specific domains (the classic math libraries are the basis for this increasingly important reusability strategy), which extends reuse into reusable domain-specific knowledge; they certainly work rather effectively. What basis is there for claiming otherwise?

> I can conceive of there being a "programming environment" someday that
> could substantially automate the process of customizing components, but
> it's going to have to pay a lot more attention to semantics than Ada does.
The knowledge-based reusability librarians discussed earlier in this newsgroup (Unisys STARS research) substantially automate the process of selecting existing components. When a component requires customization, it is conceivable that a knowledge-based customizer could make use of the same knowledge base used by the librarian to create an interface package appropriate for a particular user, while alerting component developers when a given modification is requested frequently enough to warrant efficient implementation. But these are AI topics which are largely independent of the underlying language.

Unlike C++ and other "moving targets", Ada is the product of a language *engineering* effort. Users are assured of linguistic stability for ten years at a time, long enough for investments in the language to pay off. Ada 83 software will almost certainly be suitable for automatic Ada 9X translation, thus ensuring the continuity of that investment. In the interim between revisions, opportunities exist to incorporate new ideas which emerge in the form of research languages, if they make a significant new contribution and can be efficiently implemented; but so far, inheritance isn't holding up very well on either count.

When we consider that Ada was designed in the late 70's (it took from 1980 to 1983 to get Ada through the ISO), its continuously accelerating popularity almost a decade later is quite remarkable, and a strong affirmation of the vision and skill of its designers. The concepts developed in _Abstraction_Mechanisms_And_Language_Design_ (Paul N. Hilfinger's ACM Distinguished Dissertation) will surely lead to an even higher level of support for the process of abstraction as Ada upgrades itself to support the 1990's.
Nevertheless, support for secure data abstractions which can survive external multitasking as well as utilize multitasking internally, in conjunction with the no-subsets, no-supersets, ten-year stability essential for portability and for a software components industry, is still the exclusive domain of Ada 83; and much research still needs to be done on ways to fully exploit parallelism in the implementation of abstract data types. It is quite clear that Ada 83 is a very powerful tool whose limits are much farther away than the limits of our current ability to fully utilize its capabilities.

Bill Wolfe, wtwolfe@hubcap.clemson.edu
johnson@p.cs.uiuc.edu (06/13/89)
>> by me (Ralph Johnson)
> by Bill Wolfe

> (Note for readers unfamiliar with the terminology: "class"
> is New-Speak for "type", and "method" is New-Speak for
> "procedure or function"...)

Class and type are not necessarily the same. Languages like C++ make them the same, but I think that is a mistake. It certainly is not true in Smalltalk or Lisp-based languages. Class doesn't mean type any more than it means package or module (which I have seen people claim). Methods and procedures are almost the same, except for the polymorphism induced by run-time binding of procedure calls. Also, Smalltalk predates Ada, so "New-Speak" is not appropriate. OOP may be cultish or insular, but it is not new.

>> This style of programming results in ugly class hierarchies and
>> can lead to the inefficiencies that Bill Wolfe complained about.
>> However, it is very valuable during rapid-prototyping and helps
>> lead to families of interchangeable components.

> My comments specifically indicated that the tactic may be useful
> in isolated areas in which development costs were important and
> utilization costs were not; however, this does not characterize
> the software components marketplace.

I'm not concerned about the component builders as much as the component users. Building an application is often similar to rapid-prototyping, because when you show it to your customer you find out that you were misinformed when you did the requirements analysis. Further, the most important kinds of components are application-specific, and so I expect that each company will continue to develop its own components, in addition to using a large supply of components developed outside.

>> The third way of using inheritance is to inherit from an abstract
>> class, also called a deferred class. This is the most elegant use
>> of inheritance, and is essentially using the superclass as a template
>> to generate new classes.
> By this I presume you mean the inheritance of specifications;
> if not, be precise.

No. An abstract class is a class with some methods left undefined. Thus, it is not possible to make objects from it, only to make subclasses. The subclasses must define the undefined methods. That is as precise as I can get. It's pretty hard for someone who hasn't done any object-oriented programming to understand the ramifications of it. Abstract classes are a lot like using a generic package that is parameterized by a set of procedures. One big advantage of object-oriented programming is that superclasses can provide default implementations of methods. In fact, you can look upon a class as a generic package that is parameterized by ALL its procedures, since a subclass can redefine any of the methods that it is inheriting.

>> We need to make software easier to change.

> By constructing software in a modular manner.

Modular software is good, but not enough. Inheritance lets you modify components that are not quite right. Object-oriented programming certainly promotes modularity, but it also promotes abstraction and classification of families of similar components. These are just as important.

>> In fact, most software that is written is run on only a few machines.
>> Few programmers are writing software that will run on thousands of machines.

> Oh, but I beg to differ. A large and rapidly growing number of
> programmers are writing software in Ada, a language which is
> rigorously standardized, allowing one to write software which
> will run on any machine having a validated Ada compiler. This
> amounts to millions of machines. In fact, without this level
> of standardization, it is difficult to imagine a viable software
> components industry.

I didn't say "COULD run on thousands of machines". Lots of languages let programmers produce portable code. The fact is, most programmers are application programmers, and that will continue to be true.
Object-oriented programming is good because it makes THEIR lives easier, not just the component developer's. I am a great fan of software components. All object-oriented programmers are. I believe that object-oriented programming is the best technology currently available to promote software reuse. That certainly includes Ada. Of course, it would be easy to fix Ada so that it is object-oriented, and several groups have done so, but the result is no longer official Ada.

There is a certain company that has a great Ada programming environment. Their customers often call up and ask how a certain feature was implemented in the environment, since they have figured out that they need to do the same thing in their application. They are sheepishly told that the feature is implemented using an extension to Ada; consequently the compiler that implements the extension is not true Ada, and the company is not giving away the compiler. This extension essentially adds dynamically bound procedure calls to Ada.
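For readers who have not seen them, the "dynamically bound procedure calls" that anecdote mentions can be illustrated with a minimal, hypothetical C++ sketch (the vendor's actual Ada extension is not shown here, and these names are invented). Which `kind` body runs is chosen at run time from the object itself, not at compile time from the declared type:

```cpp
#include <string>

// Dynamic binding in miniature: callers are written once against the
// base class, and each object supplies its own behavior at run time.
class Shape {
public:
    virtual ~Shape() {}
    virtual std::string kind() const { return "shape"; }
};

class Circle : public Shape {
public:
    std::string kind() const { return "circle"; }  // overrides at run time
};

// Written once against the abstraction; never needs editing when a new
// kind of Shape is added.
std::string describe(const Shape& s) { return s.kind(); }
```

This is exactly the hole that layering plain procedural packages cannot fill: without dynamic binding, `describe` would need a case statement that is edited every time a new variant appears.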
ted@nmsu.edu (Ted Dunning) (06/13/89)
i hate to get a reputation for leaping all over william thomas wolfe's
postings (and in fact i actually agreed with most of what he said this
time), but....
In article <5740@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) writes:
...
   Why? Ada provides a very high level of support for abstract
   data types, so no problems there. There is a large and growing
   set of Ada packages which support specific domains (the classic
   math libraries are the basis for this increasingly important
   reusability strategy), which extends reuse into
   reusable domain-specific knowledge; they certainly work rather
   effectively. What basis is there for claiming otherwise?
there really isn't yet a basis for claiming that there are known
problems that need stronger techniques, BUT there are stronger
techniques that have been developed that allow much more powerful
forms of genericity than that provided in ada (ml is a good early
example). these improved forms can actually subsume inheritance and
can provide a much simpler interface to a generic package.
When we consider that Ada was designed in the late 70's (it
took from 1980 to 1983 to get Ada through the ISO), its
continuously accelerating popularity almost a decade later is
quite remarkable, and a strong affirmation of the vision and
skill of its designers.
now wait a minute.
don't you think that there is a possibility that the interest in ada
is mostly market driven? further, isn't there a distinct probability
the reason that people are only now beginning to seriously talk about
ada is that in spite of the heavy market drive (i.e. dod money),
reasonable ada compilers have only recently (last 3 years) become
available on machines that people are interested in using? what does
it mean if a language is so complex that it takes the resources of the
dod nearly 10 years to force the development of even moderate quality
compilers for common development platforms?
compare this development with the concurrent development of simpler
languages that include the important features without including the
bureaucratic drek. examples of such languages include modula [23],
c++, ml, common lisp and others. compare also the effort required.
does 2 or 3 orders of magnitude simpler implementation mean something?
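a rough illustration of the parametric genericity mentioned above, written as a hypothetical sketch in c++ notation (ml's type inference is not captured, and the example is invented): one definition serves every element type that supports comparison, with no inheritance relationship and no per-type boilerplate.

```cpp
// hypothetical sketch of parametric genericity: a single definition
// that works for any element type supporting operator<. no class
// hierarchy is needed to reuse it across types.
template <typename T>
T largest(const T* xs, int n) {
    T best = xs[0];                    // assumes n >= 1
    for (int i = 1; i < n; ++i)
        if (best < xs[i]) best = xs[i];
    return best;
}
```

the caller simply applies it to an array of any comparable type; the compiler derives the instance, which is the sense in which such genericity can subsume many uses of inheritance.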
simpson@poseidon.uucp (Scott Simpson) (06/13/89)
In article <5740@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:

> From article <5186@pt.cs.cmu.edu>, by ram@wb1.cs.cmu.edu (Rob MacLachlan):
>> it or modify it. The advantage of an o-o inheritance system is that it
>> makes it very easy to reuse a component *with modifications*. If you have
>
> The same thing is possible through the use of Ada packages; a new
> package serves as an interface which hides both the calls to the old
> package and the new code which was added on top of it, thus preserving
> the integrity of the original abstraction and creating a new one.

The same thing is not possible using layered Ada packages. You will still have to write code to handle the added state. Ada's inheritance model is quite limited compared to C++ and Smalltalk. Here are a couple of problems with Ada inheritance:

1) You inherit only a single type and all its operations. For example,

       package Demo is
           type A is limited private;
           type B is limited private;
           procedure X(P1 : A; P2 : B);
       end Demo;

       with Demo; use Demo;
       procedure Illustrate is
           type S is new A;  -- You get "procedure X(P1 : S; P2 : B)".
           type T is new B;  -- You get "procedure X(P1 : A; P2 : T)".
       end Illustrate;

   How do you get "procedure X(P1 : S; P2 : T)"? You can't.

2) You cannot add state to an Ada type and have inherited operations work. In C++ you can. This makes user interface construction using Ada not quite as clean as an object-oriented approach. User interface toolkits often have quite a lot of commonality (what Stroustrup calls the litmus test of OOP), and inheritance seems to close the gap between the abstract and concrete model quite well.

To illustrate why Ada's inheritance model fails the OOP test, let's look at an example. Suppose I have an inheritance tree that looks like

       Button
         |
       Text Button
         |
       Radio Button

Let me try to model it in Ada:

       -- Package is hypothetical. Operations and data are incomplete.
       package Button is
           type Button_State is record
               Enabled : Boolean;  -- Can be pressed.
               Chosen  : Boolean;  -- Currently toggled on.
               Hit     : Boolean;  -- Currently being pushed.
           end record;
           function Create(b : Button_State) return Button_State;
           procedure Delete(b : Button_State);
           procedure Draw(b : Button_State);
           procedure Enable(b : Button_State);
           procedure Disable(b : Button_State);
       end Button;

       with Button;
       package Text_Button is
           type Button_State is new Button.Button_State;
           type Text_Button_State is record
               Ancestor_State : Button_State;
               Text           : String(1..80);  -- String in button.
               Background     : String(1..80);  -- Background color.
           end record;
           function Create_Text_Button(tb : Text_Button_State)
               return Text_Button_State;
           procedure Delete(tb : Text_Button_State);
       end Text_Button;

       with Text_Button;
       package Radio_Button is
           type Text_Button_State is new Text_Button.Text_Button_State;
           type Radio_Button_State is record
               Ancestor_State : Text_Button_State;
               Off_Value      : Integer;  -- Value when turned off.
           end record;
           procedure Redraw(rb : Radio_Button_State);
           function Create_Radio_Button(rb : Radio_Button_State)
               return Radio_Button_State;
       end Radio_Button;

       with Radio_Button; use Radio_Button;
       procedure Main is
           Radio_Button_Instance : Radio_Button_State;
       begin
           -- The number of periods determines how far up in the
           -- inheritance tree, relative to here, the field we are
           -- looking for is.
           Radio_Button_Instance.Ancestor_State.Ancestor_State.Enabled
               := True;
           -- Delete(Radio_Button_Instance);              -- Illegal!
           Delete(Radio_Button_Instance.Ancestor_State);  -- Not what we want.
       end Main;

Notice that once I added state, my operations no longer work. There are a few ways to get around this limitation: 1) repeat operations at each level; 2) try a different model; 3) use a different language. We thought about using Classic Ada (a superset of Ada with Smalltalk-style inheritance and dynamic binding) or C++. InterViews is a C++ toolkit done at Stanford which exploits C++'s inheritance. Its structure is quite elegant.
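For contrast, here is a hypothetical C++ rendering of the same hierarchy (names invented, operations incomplete, just as in the Ada version): state added in a subclass does not break inherited operations, and calls dispatch to the most specific version through a base pointer.

```cpp
#include <string>

// Hypothetical Button hierarchy in C++: subclasses add state freely,
// yet operations inherited from Button keep working on them.
class Button {
public:
    bool enabled;                       // can be pressed
    Button() : enabled(false) {}
    virtual ~Button() {}
    void enable() { enabled = true; }   // inherited unchanged by subclasses
    virtual std::string kind() const { return "button"; }
};

class TextButton : public Button {
public:
    std::string text;                   // added state: string in button
    std::string kind() const { return "text button"; }
};

class RadioButton : public TextButton {
public:
    int off_value;                      // more added state
    RadioButton() : off_value(0) {}
    std::string kind() const { return "radio button"; }
};
```

Here `enable` needs no repetition at each level, and a call through a `Button*` reaches the radio button's own `kind`; this is the gap the layered-package approach has to paper over by hand.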
Scott Simpson TRW Space and Defense Sector oberon!trwarcadia!simpson (UUCP) trwarcadia!simpson@usc.edu (Internet)
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/13/89)
From ted@nmsu.edu (Ted Dunning):

> don't you think that there is a possibility that the interest in ada
> is mostly market driven? further, isn't there a distinct probability
> the reason that people are only now beginning to seriously talk about
> ada is that in spite of the heavy market drive (i.e. dod money),
> reasonable ada compilers have only recently (last 3 years) become
> available on machines that people are interested in using? what does
> it mean if a language is so complex that it takes the resources of the
> dod nearly 10 years to force the development of even moderate quality
> compilers for common development platforms?

Initially, compiler vendors concentrated on getting their compilers validated, efficiency be damned. This is an inevitable effect of the rigorous standardization and the prohibition of subsetting. Once the validated compilers began to appear (circa 1985), the focus then turned toward improving efficiency; much excellent work on global optimization techniques is now paying off in the form of the current "second-generation" Ada compilers. It is the growing non-DoD interest in Ada which has motivated vendors to target "machines that people are interested in using", as you put it.

Certainly from a commercial perspective, Ada has been an easy language to invest in because of the stability and size of its market. However, I tend to disagree with the proposition that interest in Ada is mostly a consequence of DoD $$$; I know of many people (including myself) who have been highly interested in Ada since roughly 1980. As the most advanced member of the Algol family, Ada is the obvious choice for those who prefer that group of languages (the general progression being Algol -> Pascal -> Modula-N -> Ada).

> compare this development with the concurrent development of simpler
> languages that include the important features without including the
> bureaucratic drek. examples of such languages include modula [23],
> c++, ml, common lisp and others.
First, this depends upon your perception of the "important features". Ada provides much that other languages don't give you; I view this as an advantage. Analyses of Ada which have been done in response to the people who have claimed it to be too large show that very little can be trimmed from Ada without reducing the available functionality, so the question condenses to "Are you willing to assume that you'll never need the things that a smaller language simply cannot provide?".

As for "bureaucratic drek", I'm not entirely certain what you're referring to; I don't have to write memos to the Ada Joint Program Office, and they don't send them to me. But the AJPO does guarantee me that what I write in Ada here will compile on any other Ada compiler, and I consider that a very important service.

> compare also the effort required. does 2 or 3 orders of magnitude
> simpler implementation mean something?

If I'm looking for a language to implement as a student compiler project, perhaps. If I'm looking for a language to write in, I only care about whether or not I can find a decent compiler; given that many of them now exist, it's simply a matter of indifference.

Bill Wolfe, wtwolfe@hubcap.clemson.edu
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/13/89)
From simpson@poseidon.uucp (Scott Simpson):

>>> The advantage of an o-o inheritance system is that it
>>> makes it very easy to reuse a component *with modifications*.
>>
>> The same thing is possible through the use of Ada packages; a new
>> package serves as an interface which hides both the calls to the old
>> package and the new code which was added on top of it, thus preserving
>> the integrity of the original abstraction and creating a new one.
>
> The same thing is not possible using layered Ada packages. You will still
> have to write code to handle the added state. [...]
> Notice that once I added state, my operations no longer work. There are
> a few ways to get around this limitation:
>
> 1) Repeat operations at each level.

This is precisely what I had in mind. Note: "a new package serves as an interface which hides both the calls to the old package and the new code which was added on top of it". But, you wail, this will cost me a bit of typing!! Well, not necessarily. The same pseudo-problem arises when you write an Ada package specification and then supposedly have to retype all the stuff that was in the specification over in the body:

    function Umptysquat (Using : in Parameter_Type) return Result_Type;

must be retyped as:

    function Umptysquat (Using : in Parameter_Type) return Result_Type is
        Result : Result_Type;
    begin  -- function Umptysquat
        return Result;
    end Umptysquat;

in the body, before you enter the next phase of going in to add the logic. Well, wouldn't you know it, a short little program can be written to take an Ada package specification and automatically generate a corresponding package body, with type definitions and the like recast as comments, and procedures and functions rewritten as stubs.
The same thing can be done when you wish to add a layer of abstraction; simply have a program ask you for the name of the new data type(s) and automatically generate a higher-level package which implements itself through calls to the lower-level packages. Q.E.D., as they say in mathematics...

Bill Wolfe, wtwolfe@hubcap.clemson.edu
simpson@trwarcadia.uucp (Scott Simpson) (06/14/89)
In article <5750@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:

>> From simpson@poseidon.uucp (Scott Simpson):
>> Notice that once I added state, my operations no longer work. There are
>> a few ways to get around this limitation:
>>
>> 1) Repeat operations at each level.
>
> This is precisely what I had in mind. Note: "a new package serves
> as an interface which hides both the calls to the old package and
> the new code which was added on top of it".

I assume you mean that you wish to incrementally wrap packages in this manner. If this is the case, you have to create new functions that are in one-to-one correspondence with the package at the previous level. This is error prone. Also, you will have to *add code* to handle the added state. In an OOPL you neither repeat operations nor add code. You simply specify the relationship that one type is a subtype of another. This relationship is language enforced and language supported. It is not by convention.

> when you wish to add a layer of abstraction; simply have a program ask you
> for the name of the new data type(s) and automatically generate a higher-
> level package which implements itself through calls to the lower-level
> packages. Q.E.D., as they say in mathematics...

Writing such a program is not necessary in an OOPL. How will this automatic code generation program know how to write code for the added state? Do you know of such a program?

Scott Simpson TRW Space and Defense Sector oberon!trwarcadia!simpson (UUCP) trwarcadia!simpson@usc.edu (Internet)
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/15/89)
From article <130200003@p.cs.uiuc.edu>, by johnson@p.cs.uiuc.edu:

> I'm not concerned about the component builders as much as the component
> users. Building an application is often similar to rapid-prototyping, [...]
> Inheritance lets you modify components that are not quite right.
> [...] extension essentially adds dynamically bound procedure calls to Ada.

From adamsf@cs.rpi.edu (Frank Adams):

> I suspect that software components in object form will never be successful.
> There is too much need to diddle with things, too little which can be used
> as is. And inheritance with run-time binding is too often an inadequate
> method of diddling.

OK, it is becoming clear that there are some different sets of underlying assumptions here, so let's identify them for discussion:

1) It is fairly unlikely that any particular component will fit the application developer's requirements, hence components must be rapidly modifiable, as with inheritance. For some reason, using a tool which automatically constructs a higher level of abstraction (for appropriate modification) isn't appropriate, even though this also isolates the specification from the implementation.

2) For some reason, run-time binding is necessary.

3) Even run-time binding may not be enough.

Regarding 1), I have exactly the opposite perspective; in general, either a component is readily available (mostly the case), or it gets invented on the spot. Furthermore, in the very unlikely event that I need to slightly modify an existing ADT, a tool can be used to generate another level of abstraction and proceed from there. Regarding 2) and 3), I don't perceive any need for run-time binding.
Even if we were to define classes of types (characterized by the operations they have available) and allow procedure parameters to be described in terms of such classes, this still would simply amount to a more readable version of "this procedure is generic in the type of this parameter; auto-instantiate as necessary after determining at compile time the type of the actual parameter". So perhaps Ralph and Frank could provide some insight as to why they (apparently) hold the above assumptions, editing the above listed assumptions as necessary to reflect their actual positions...

Bill Wolfe, wtwolfe@hubcap.clemson.edu
johnson@p.cs.uiuc.edu (06/15/89)
From billwolf%hazel.c@hubcap.clemson.edu (Bill Wolfe):

> Well, wouldn't you know it, a short little program can be written to take
> an Ada package specification and automatically generate a corresponding
> package body, with type definitions and the like recast as comments, and
> procedures and functions rewritten as stubs. The same thing can be done
> when you wish to add a layer of abstraction; simply have a program ask you
> for the name of the new data type(s) and automatically generate a higher-
> level package which implements itself through calls to the lower-level
> packages. Q.E.D., as they say in mathematics...

This is called delegation, which is a common way of implementing inheritance in object-oriented languages. One problem is that if you inherit procedure X, which calls procedure Y, but redefine Y, then the inherited version of X will call the wrong version of Y. This can be solved by copying all the code in the original class, not just calling it. True delegation uses run-time polymorphism to solve this problem, which is extremely useful for code reuse but is not present in Ada. However, it can be simulated with a preprocessor. I assume that is what the various versions of "object-oriented Ada" do.

Ralph Johnson
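[A minimal C++ sketch, with invented names, of the X-calls-Y problem Ralph describes: a wrapper that forwards to a wrapped component never affects the wrapped component's internal calls, while inheritance with late binding does.]

```cpp
// Base component: x() is implemented in terms of y().
class Base {
public:
    virtual ~Base() = default;
    int x() { return y() + 1; }     // internally calls y()
    virtual int y() { return 10; }
};

// Delegation (the "wrapper package" approach): redefining y here does
// NOT affect the forwarded x(), which still reaches Base's y().
class Wrapper {
    Base inner;
public:
    int x() { return inner.x(); }   // forwards to Base::x -> Base::y
    int y() { return 20; }          // redefined, but x() never sees it
};

// Inheritance with run-time binding: Base::x() now calls the override.
class Derived : public Base {
public:
    int y() override { return 20; } // x() picks this up via late binding
};
```

With delegation, `Wrapper::x()` yields 11 (Base's y was used); with inheritance, `Derived::x()` yields 21 (the redefined y was used).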
ted@nmsu.edu (Ted Dunning) (06/16/89)
there seems to be a collision of programming styles going on in this group. in particular, the edit/compile camp insists that run-time flexibility is not really needed since most of the work of instantiation and inheritance can be done at compile or edit time, while the interactive/run-time binding camp insists that this is not sufficient.

in particular, william thomas wolfe puts the compile/edit side of the argument concisely when he writes

    Regarding 1), I have exactly the opposite perspective; in general,
    either a component is readily available (mostly the case), or it
    gets invented on the spot. Furthermore, in the very unlikely event
    that I need to slightly modify an existing ADT, a tool can be used
    to generate another level of abstraction and proceed from there.

what he misses are the real reasons inheritance must be implemented in a more fundamental way than just a fancy editor. these reasons are parsimony in the source text, and flexibility in the program development phase.

while it is true that having an editor write interface and implementation code effectively provides the necessary mechanisms of inheritance, this ignores the fact that the programmer is left with a LARGE mass of code to work with. anyone who has tried to write a new x-window widget that is only a trivial specialization of an old one can tell you that there is a HUGE amount of code that you have to learn to ignore. this overhead should be hidden from the programmer to simplify the reading of the code (even after you have learned to ignore huge chunks of stuff), and also to prevent inadvertent modification.

secondly, having an inheritance mechanism built into the language will likely have the secondary consequence that the mechanism will be built into the debuggers for the language. trying to debug an object oriented program without support from the debugger is similar to trying to use a debugger which does not support structure display.
a debugger must avoid presenting the programmer with excessive detail as a default.

    Regarding 2) and 3), I don't perceive any need for run-time binding.
    Even if we were to define classes of types (characterized by the
    operations which it has available) and allow procedure parameters
    to be described in terms of such classes, this still would simply
    amount to a more readable version of "this procedure is generic in
    the type of this parameter; auto-instantiate as necessary after
    determining at compile time the type of the actual parameter".

you can't seriously be saying that the equivalent of c++ virtual functions is not needed. it is crucial that you be able to pass around an object which is declared to be a relatively general object but which is in fact some specialization unknown to the code at run time.

if you aren't saying this, then you must be saying that the extent of genericity of a class can be determined at compile time.

if so, consider a program that reads a textual description of objects to be instantiated. this is a simplification of what an interpreter for an object oriented language must do. how, in a completely compile-oriented environment, can you provide a consistent instantiation mechanism in this case? must you resort to the fabrication of an entire run-time class inheritance scheme from whole cloth? why is a language that requires this a good one?

- of course, since no other language but ada is worthy, then the
- simulation of object oriented languages is also not worthy :-)
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/16/89)
From article <5044@wiley.UUCP>, by simpson@trwarcadia.uucp (Scott Simpson):

> I assume you mean you wish to incrementally wrap packages in this manner.
> If this is the case, you have to create new functions that are in
> one-to-one correspondence with the package at the previous level. This
> is error prone. Also, you will have to *add code* to handle the added
> state. In an OOPL neither do you repeat operations nor add code.
> You simply specify the relationship that one type is a subtype of another.
> This relationship is language enforced and language supported. It is not
> by convention.
>
>> when you wish to add a layer of abstraction; simply have a program ask you
>> for the name of the new data type(s) and automatically generate a higher-
>> level package which implements itself through calls to the lower-level
>> packages. Q.E.D., as they say in mathematics...
>
> Writing such a program is not necessary in an OOPL. How will this
> automatic code generation program know how to write code for the added
> state? Do you know of such a program?

Incremental wrapping can be accomplished automatically, and can also serve to map a single new abstraction to N underlying abstractions. The basic approach is to specify to the program the underlying type(s) and the name of the new type you wish to create. The program analyzes the files containing the specifications of the underlying ADTs and creates a new specification containing the new type name and all the operations applicable to the underlying ADTs, eliminating duplicates. Next, the program creates the package body. The type is automatically implemented:

    package body New_Type_ADT is

        -- type New_Type is limited private;
        -- type New_Type_Descriptor;
        -- type New_Type is access New_Type_Descriptor;

        type New_Type_Descriptor is
            record
                Underlying_Type_A : First_Underlying_Type;
                Underlying_Type_B : Second_Underlying_Type;
                -- etc.
            end record;

        procedure New_Type_Operation_X (The_New_Type : in out New_Type) is
            -- Let's assume that this is an operation applying to
            -- both underlying types; then we call the underlying
            -- packages to update the state of both fields...
        begin  -- New_Type_Operation_X
            First_Underlying_Type_ADT.New_Type_Operation
                (The_New_Type.Underlying_Type_A);
            Second_Underlying_Type_ADT.New_Type_Operation
                (The_New_Type.Underlying_Type_B);
        end New_Type_Operation_X;

        -- etc. for other operations, as appropriate...

    end New_Type_ADT;

Now you can go in and add new state, make changes, etc. as necessary. Each procedure can be recoded as necessary, while the default of just calling an underlying abstraction is available in case you wish to employ it. We can still replace the implementation at any time without recompiling anything depending upon the specification. We retain total control of every detail of the implementation; if it is not appropriate to update all N subcomponents in the example above, then it's easy to just edit the implementation to suit your needs.

In short, I consider the problem to be simply a question of tool availability, similar to the problem of automatically generating the compilable outline of a package body from the package's specification. In what way does this not satisfy all requirements?

Bill Wolfe, wtwolfe@hubcap.clemson.edu
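[A hedged C++ analogue of the generated New_Type_ADT sketch above: one new type holding N underlying components, with each operation defaulting to calls on the underlying abstractions. All type and member names here are invented stand-ins for the generated code.]

```cpp
// Two underlying ADTs, standing in for the lower-level packages' types.
class FirstUnderlying {
    int state = 0;
public:
    void operation() { ++state; }       // stand-in for the real operation
    int value() const { return state; }
};

class SecondUnderlying {
    int state = 0;
public:
    void operation() { state += 2; }
    int value() const { return state; }
};

// The generated higher-level type: a record holding both subcomponents,
// with the default body of each operation just calling the underlying
// abstractions, exactly as in the Ada sketch.
class NewType {
    FirstUnderlying a;   // Underlying_Type_A
    SecondUnderlying b;  // Underlying_Type_B
public:
    void operation_x() {        // default: update the state of both fields
        a.operation();
        b.operation();
    }
    int total() const { return a.value() + b.value(); }
};
```

Each default body can later be recoded by hand, which is the flexibility Bill claims for the tool-generated layer.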
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/17/89)
From ted@nmsu.edu (Ted Dunning):

> there seems to be a collision of programming styles going on in this
> group. in particular, the edit/compile camp insists that run-time
> flexibility is not really needed since most of the work of
> instantiation and inheritance can be done at compile or edit time,
> while the interactive/run-time binding camp insists that this is not
> sufficient.

Strongly agree with this.

> while it is true that having an editor write interface and
> implementation code effectively provides the necessary mechanisms of
> inheritance, this ignores the fact that the programmer is left with a
> LARGE mass of code to work with. anyone who has tried to write a new
> x-window widget that is only a trivial specialization of an old one
> can tell you that there is a HUGE amount of code that you have to
> learn to ignore. this overhead should be hidden from the programmer
> to simplify the reading of the code (even after you have learned to
> ignore huge chunks of stuff), and also to prevent inadvertent
> modification.

Counterpoints would be:

- If an object has vast numbers of operations available upon it, then either this correctly represents the actual complexity of the object's definition (in which case they should all be directly visible), or the object is defined in an overly complex manner and should be simplified.

- Why would there be a problem with accidental modification? If you modify something, you must as a matter of course know precisely what you are doing and why. (No?)

> secondly, having an inheritance mechanism built into the language will
> likely have the secondary consequence that the mechanism will be built
> into the debuggers for the language. trying to debug an object
> oriented program without support from the debugger is similar to
> trying to use a debugger which does not support structure display. a
> debugger must avoid presenting the programmer with excessive detail as
> a default.

Where would the excessive detail arise?
Generally, you suspect a particular procedure, and use the debugger to "step into" it directly.

% Regarding 2) and 3), I don't perceive any need for run-time binding.
% Even if we were to define classes of types (characterized by the
% operations which it has available) and allow procedure parameters
% to be described in terms of such classes, this still would simply
% amount to a more readable version of "this procedure is generic in
% the type of this parameter; auto-instantiate as necessary after
% determining at compile time the type of the actual parameter".
%
% you can't seriously be saying that the equivalent of c++ virtual
% functions is not needed. it is crucial that you be able to pass
% around an object which is declared to be a relatively general object
% but which is in fact some unknown (to the code at run-time)
% specialization.
%
% if you aren't saying this, then you must be saying that the extent of
% genericity of a class can be determined at compile time.
%
% if so, consider a program that reads a textual description of objects
% to be instantiated. this is a simplification of what an interpreter
% for an object oriented language must do.
%
% how in a completely compile oriented environment can you provide a
% consistent instantiation mechanism in this case? must you resort to
% the fabrication of an entire run-time class inheritance scheme from
% whole cloth?

Well, technically it is possible to do instantiation at run time if necessary, although we compile-oriented types vastly prefer to pay the price only once, at compile time, if at all possible. Briefly, you set up a local block, and instantiate inside it:

    declare
        package Whatever is new Something (A, B, C...);
    begin
        -- make use of your newly instantiated package...
    end;

The generic parameters A, B, C... are determined at run time; for example, they could be parameters to a surrounding procedure.
Clearly this can lead to bug situations which will be extremely difficult to track down (since they only arise in some particular, possibly complex, run-time situation) if we are not very, very careful. It's considered a risky situation, to be avoided like the plague unless absolutely necessary; when it is absolutely necessary, one must proceed with extreme caution. Bill Wolfe, wtwolfe@hubcap.clemson.edu
billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (06/20/89)
From johnson@p.cs.uiuc.edu:

> [re: automatic generation of a higher level of abstraction]
>
> This is called delegation, which is a common way of implementing
> inheritance in object-oriented languages. One problem is that
> if you inherit procedure X, which calls procedure Y, but redefine
> Y then the inherited version of X will call the wrong version of Y.

If Y is redefined, then the implementation of X will become obsolete, and one can then take whatever steps are deemed necessary. One could have the tool copy code as an option, of course, which in all probability would prove quite useful when it came time to reimplement more efficiently.

One of the reasons I'm not terribly fond of inheritance is that it doesn't correspond to the "natural" way in which reasoning about objects typically proceeds. We did not first conceive of the general concept of fruit; rather, we discovered one particular fruit, then another, then another, and eventually invented fruit as a general concept. Similarly, one first invents one ADT, then another, then another, and eventually a pattern begins to emerge which seems to be worthy of its own name. Thus, I think it more appropriate to simply define abstract classes of types, characterized by the operations applicable to any type in that class.

From apc@cbnews.ATT.COM (Alan P. Curtis):

> I think the point of truly reusable components is that you cannot, and
> should not diddle! [...]
>
> Do you diddle with quad nand gates?
> Do you diddle with the internals of a 68k?
>
> I didn't think so.

Exactly. The objective, I think, is to come up with catalogs of reusable software components just as electrical engineers have catalogs of reusable hardware components.
And for each component, a separate listing of different implementations -- the software implementations varying according to whether one operation is O(log n) at the expense of another which consequently drops to O(n), etc., and the hardware implementations varying according to the temperature ranges which will be tolerated, etc.; in this way, we hope to achieve similar levels of productivity in *software* engineering.

As I see it, the Big Problems are:

- Capturing the domain knowledge in the first place and encoding the functional descriptions into appropriate component interfaces.

- Enforcing standards of component quality: complete and precise specifications along with readable abstracts, and verification of component correctness even under externally parallel conditions.

Only when we have achieved all this for a reasonably large number of domains will we be in a position to claim that software engineering is not a contradiction in terms, which (justifiably) is the current position of many in the less-recently-created engineering professions.

Another extremely important problem is:

- Exploiting internally parallel processing to achieve very high-speed implementations which externally can be treated as if they were sequential implementations which simply happen to run VERY fast.

Parallel processing is a complex area, and very error-prone; in my view, one of the best possible ways to manage this complexity is to hide that complexity within a component specification which can be used as if it were nothing more than a simple sequential component. At no cost in terms of extra programming time, our applications will be able to achieve the high responsiveness which can only be achieved through parallel processing, certainly a major breakthrough in terms of software technology. On top of this would be the possibility of advanced compiler technology which might be able to parallelize even the sequential code lying on top of the already-parallel components.
This of course will interact with the corresponding developments in computer networking (e.g., the NECTAR backplane, which enables many systems to transfer data across large distances at nearly the speed of light) and network operating systems (process migration technology) to achieve nearly optimal utilization.

From adamsf@cs.rpi.edu (Frank Adams):

>> [A] short little program can be written to ... add a layer of abstraction;
>
> This is just inheritance via another name. Isn't it better to have this as
> a language feature instead of an external hack?

No, because the automatically generated code will eventually be rewritten, and there needs to be the separation between specification and implementation so that the rewrite will not obsolete lots of code. Another negative aspect is that inheritance tends to focus attention on the underlying layer of abstraction, instead of the current layer of abstraction. With calls to an underlying abstraction, the focus on how to implement the current abstraction is maintained.

From Ralph Johnson:

>> I don't perceive any need for run-time binding.
>
> Consider a windowing system. A big window can contain smaller windows.
> Each window can display itself. A large window displays itself by displaying
> its smaller windows. It would have an array of subwindows, and would
> iterate over them, performing the DISPLAY operation on each one. However,
> since different subwindows display different kinds of things, there must
> be run-time binding.

I think not; the "large window" can simply rendezvous with each of its subwindows (an array of tasks) in order to notify them of the redisplay requirement. In turn, the subwindows then rendezvous with application-specific tasks as necessary.
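[For reference, the run-time-binding version of Ralph's windowing example can be sketched in C++ as follows; class names and the string-returning display() are invented for illustration (a real toolkit would draw to a screen).]

```cpp
#include <memory>
#include <string>
#include <vector>

// Each kind of window knows how to display itself; the concrete kind of
// a subwindow is not known until run time, so display() is late-bound.
class Window {
public:
    virtual ~Window() = default;
    virtual std::string display() const = 0;
};

class TextWindow : public Window {
    std::string contents;
public:
    explicit TextWindow(std::string s) : contents(std::move(s)) {}
    std::string display() const override { return contents; }
};

// A big window displays itself by iterating over its subwindows and
// performing the DISPLAY operation on each one, whatever its kind.
class CompositeWindow : public Window {
    std::vector<std::unique_ptr<Window>> subwindows;
public:
    void add(std::unique_ptr<Window> w) { subwindows.push_back(std::move(w)); }
    std::string display() const override {
        std::string out;
        for (const auto& w : subwindows)
            out += w->display() + "\n";  // late-bound call
        return out;
    }
};
```

The loop never needs to know which kinds of subwindows it holds; adding a new window kind requires no change to CompositeWindow.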
I've done some work with the Telesoft Sunview <-> Ada binding, which approaches it by having you instantiate a generic with the user-supplied procedure to be called whenever the window needs to consult the application as to how to handle specific user input, but I think a tasking approach would provide a much cleaner solution. Bill Wolfe, wtwolfe@hubcap.clemson.edu
johnson@p.cs.uiuc.edu (06/20/89)
> Bill Wolfe, wtwolfe@hubcap.clemson.edu
>
> One of the reasons I'm not terribly fond of inheritance is that it doesn't
> correspond to the "natural" way in which reasoning about objects typically
> proceeds. We did not first conceive of the general concept of fruit;
> rather, we discovered one particular fruit, then another, then another,
> and eventually invented fruit as a general concept. Similarly, one first
> invents one ADT, then another, then another, and eventually a pattern
> begins to emerge which seems to be worthy of its own name. Thus, I
> think it more appropriate to simply define abstract classes of types,
> characterized by the operations applicable to any type in that class.

This is true. However, it is no reason to dislike inheritance. An abstract class is NOT just a signature; it also includes procedures that use the undefined operations. Thus, a subclass only has to implement a few procedures and then it can use a large number of inherited ones. Abstractions are hard to find but very valuable. It is true that people first generalize from the concrete, but once you have learned an abstraction then you can generate more concrete instances of it.

One of the things about inheritance is that it makes it easier to find patterns. Subclasses naturally have the same interface as superclasses. Thus, even when you are not worrying about abstractions, and are just trying to reuse code, you are setting the stage for later reflection on how to clean up the design and find reusable abstractions.

In general, designs are a lot more reusable than code. Of course, most reusable designs exist only in the heads of programmers. One of the problems in increasing software reuse is how to represent designs so that they can be shared. Object-oriented programming offers an approach to doing this. Abstract classes are a good way of describing the design of a data structure. Frameworks describe how a set of classes are used together to make an application.

Ralph Johnson
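[A small C++ sketch of the point that an abstract class is not just a signature: it carries procedures written in terms of the deferred operations, which every subclass inherits for free. The Collection/VectorCollection names and operations are invented for illustration.]

```cpp
#include <algorithm>
#include <vector>

// An abstract (deferred) class: two operations are left undefined,
// but contains() and maximum() are already written in terms of them.
class Collection {
public:
    virtual ~Collection() = default;
    virtual int size() const = 0;     // deferred to subclasses
    virtual int at(int i) const = 0;  // deferred to subclasses

    // Inherited by every subclass that fills in the two above.
    bool contains(int x) const {
        for (int i = 0; i < size(); ++i)
            if (at(i) == x) return true;
        return false;
    }
    int maximum() const {  // assumes a non-empty collection
        int m = at(0);
        for (int i = 1; i < size(); ++i) m = std::max(m, at(i));
        return m;
    }
};

// A subclass implements only a few procedures...
class VectorCollection : public Collection {
    std::vector<int> data;
public:
    explicit VectorCollection(std::vector<int> d) : data(std::move(d)) {}
    int size() const override { return static_cast<int>(data.size()); }
    int at(int i) const override { return data[i]; }
};
// ...and then uses the larger number of inherited ones.
```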
johnson@p.cs.uiuc.edu (06/21/89)
>> This is just inheritance via another name. Isn't it better to have this as
>> a language feature instead of an external hack?
>
> No, because the automatically generated code will eventually be
> rewritten, and there needs to be the separation between specification
> and implementation so that the rewrite will not obsolete lots of code.

Sometimes subclasses are rewritten, and sometimes they are not. A subclass is more likely to be rewritten in the early stages of a project, while the abstractions are still being discovered. Later on, once the class library becomes mature and application programmers are using it instead of the developer and his/her friends, few classes are rewritten.

> Another negative aspect is that inheritance tends to focus attention
> on the underlying layer of abstraction, instead of the current layer
> of abstraction. With calls to an underlying abstraction, the focus
> on how to implement the current abstraction is maintained.

The way inheritance is introduced in papers, the underlying layer of abstraction, i.e. the abstraction represented by the superclass, encompasses the current abstraction. Further, the superclass directs your focus to the procedures that you need to implement. Thus, I don't think that this is really a problem.

Ralph Johnson
ted@nmsu.edu (Ted Dunning) (07/22/89)
In article <130200005@p.cs.uiuc.edu> johnson@p.cs.uiuc.edu writes:
You could implement an object-oriented Ada with a preprocessor by
rewriting late-bound procedure calls into case statements. In this case,
your preprocessor would be automatically maintaining the case statements.
Ralph Johnson
but doesn't this get directly into the area of language extension?
doesn't this violate the `ada is revealed truth' axiom?
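[A hedged sketch of the rewriting Ralph describes: the kind of code such a preprocessor might emit, with a type tag and one case statement per late-bound call. The Shape/area names are invented for illustration.]

```cpp
// What a preprocessor for an "object-oriented Ada" might emit, rendered
// in C++: a tag per concrete type, and late-bound calls rewritten into
// case statements that the preprocessor maintains automatically.
enum class Tag { Circle, Square };

struct Shape {
    Tag tag;
    double dim;  // radius or side length, depending on tag
};

// The late-bound call "area(s)" rewritten into a case statement.
double area(const Shape& s) {
    switch (s.tag) {
        case Tag::Circle: return 3.14159265358979 * s.dim * s.dim;
        case Tag::Square: return s.dim * s.dim;
    }
    return 0.0;  // unreachable if the case statement is kept up to date
}
```

Adding a new kind of Shape means extending every such case statement, which is exactly the maintenance burden the preprocessor would automate.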
adamsf@cs.rpi.edu (Frank Adams) (07/22/89)
In article <5750@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:

>From simpson@poseidon.uucp (Scott Simpson):
> [A] short little program can be written to ... add a layer of abstraction;
> simply have a program ask you for the name of the new data type(s) and
> automatically generate a higher-level package which implements itself
> through calls to the lower-level packages.

This is just inheritance via another name. Isn't it better to have this as a language feature instead of an external hack?

Frank Adams adamsf@cs.rpi.edu
johnson@p.cs.uiuc.edu (07/23/89)
Bill Wolfe is right -- I believe the following two things:

1) It is fairly unlikely that any particular component will fit the
application developer's requirements, hence components must be rapidly
modifiable, as with inheritance.  For some reason, using a tool which
automatically constructs a higher level of abstraction (for appropriate
modification) isn't appropriate, even though this also isolates the
specification from the implementation.

2) For some reason, run-time binding is necessary.

As far as the third goes, I think that diddling with old code is a
necessary evil, but should be avoided where possible.  We need to make
reuse systems that let us leave old code alone.

> ... In general,
> either a component is readily available (mostly the case), or it
> gets invented on the spot.

That's because you have no alternative.  Inheritance creates a new
alternative: invent a component by reusing most of the design of an
existing one.  I find that user interface tool-kits provide 95% of the
components that I need.  In a small program, they might provide everything
I need.  However, in a large program most of my time goes to building the
5% of the components that don't exist.  Inheritance makes that 5% easier.

> Furthermore, in the very unlikely event
> that I need to slightly modify an existing ADT, a tool can be used to
> generate another level of abstraction and proceed from there.

As I said before, I think that this is a good idea.  It is essentially
inheritance.

> I don't perceive any need for run-time binding.

Consider a windowing system.  A big window can contain smaller windows.
Each window can display itself.  A large window displays itself by
displaying its smaller windows.  It would have an array of subwindows,
and would iterate over them, performing the DISPLAY operation on each
one.  However, since different subwindows display different kinds of
things, there must be run-time binding.  In a conventional language this
would be implemented by case statements.
However, case statements result in code that is much less reusable, since adding a new kind of window requires adding a new case to a whole bunch of case statements. You could implement an object-oriented Ada with a preprocessor by rewriting late-bound procedure calls into case statements. In this case, your preprocessor would be automatically maintaining the case statements. Ralph Johnson
jonasn@ttds.UUCP (Jonas Nygren) (08/13/89)
In article <130200005@p.cs.uiuc.edu> johnson@p.cs.uiuc.edu writes:
>
>Bill Wolfe is right -- I believe the following two things:
 <deleted>
> 2) For some reason, run-time binding is necessary.
 <deleted>
>Consider a windowing system.  A big window can contain smaller windows.
>Each window can display itself.  A large window displays itself by displaying
>its smaller windows.  It would have an array of subwindows, and would
>iterate over them, performing the DISPLAY operation on each one.  However,
>since different subwindows display different kinds of things, there must
>be run-time binding.  In a conventional language this would be implemented
>by case statements.  However, case statements result in code that is much

If C is to be considered a conventional language then there is a way to
avoid the case-construct.  Just declare a structure as follows:

	typedef struct {
	    ....
	    void (*display)();
	} window;

and then the following code would achieve what Ralph wants:

	window a, b;
	....
	a.display = draw_small_window;
	b.display = draw_big_window;
	....
	(*a.display)(a);
	(*b.display)(b);

without case-constructs.

It's even possible to achieve single-inheritance by adding the derived
class's members after the base-class members in a new struct, and this
could be semi-automated by nested include-files.

From my point of view run-time binding is not necessary at all!

>Ralph Johnson

Jonas Nygren
jos@cs.vu.nl (Jos Warmer) (08/24/89)
jonasn@ttds.UUCP (Jonas Nygren) writes: >In article <130200005@p.cs.uiuc.edu> johnson@p.cs.uiuc.edu writes: >> 2) For some reason, run-time binding is necessary. >If C is to be considered a conventional language then there is a way to avoid >the case-construct. Just declare a structure as follows: > typedef struct{ > .... > void (*display)(); > } window; >and then the following code would achieve what Ralph wants: > window a, b; > a.display = draw_small_window; > b.display = draw_big_window; The previous two lines are just run-time binding done by hand. > (*a.display)(a); > (*b.display)(b); >without case-constructs. >It's even possible to achieve single-inheritance by adding the derived >class's members after the base-class members in a new struct, and this >could be semi-automated by nested include-files. This is just inheritance done by hand. Conclusion: Run-time binding still seems neccesary. OO languages offer this support automatically. In C you can do-it-yourself, although it looks very cumbersome to me. Jos Warmer jos@cs.vu.nl ...uunet!mcvax!cs.vu.nl!jos --
ted@nmsu.edu (Ted Dunning) (08/25/89)
In article <3053@vlot.cs.vu.nl> jos@cs.vu.nl (Jos Warmer) writes:
...
Run-time binding still seems necessary.
OO languages offer this support automatically.
In C you can do-it-yourself, although it looks very cumbersome to me.
the best example around is the X11 toolkit. they do wonderful things
with oo programming by hand in c. unfortunately, there is a mountain
of code to plough through when you want to write or read the code for
a widget. of course you can learn to skip all the uninteresting
parts, but why is this necessary?
--
ted@nmsu.edu
Most of all, he loved the fall
when the cottonwood leaves turned gold
and floated down the trout streams
under the clear blue windswept skies.