[comp.lang.smalltalk] Software ICs

fouts@orville.nas.nasa.gov (Marty Fouts) (10/21/87)

In article <1661@ppi.UUCP> cox@ppi.UUCP (Brad Cox) writes:

>. . . But the improvement has always turned out to
>be arithmetic in impact.  The geometric improvements needed to bring our
>productivity in line with that of hardware engineers will not result from
>better programming languages, but by focusing our attention outside the
>language.  For example, by learning to program by producing and reusing
>components from large libraries of pre-tested Software-ICs. Yes, these
>libraries are hard to build, and expensive. But each well-tested,
>well-documented library component provides a geometrical improvement to the
>productivity of each of its users, and the improvement is open-ended, unlike
>the productivity enhancement of features that are hardwired into a new
>programming language.

I have three problems with this comment.  The one that bothers me the
most is the standard marketing ploy of renaming something from its
original lackluster name (library routine) to something that sounds
exciting, like "Software-IC".  Libraries have been around at least as
long as programming languages, and have contributed their share to the
productivity improvement, but they aren't the glorious path to
geometric improvement, or we would have been seeing geometric
improvements over the decades.

The second problem I have is the analogy which isn't stated here, but
is frequently drawn between "Software-IC" and hardware IC.  If you
follow the component industry at all, you know that the age of TTL
7400 series ICs has all but ended and almost all serious design now is
being done with semicustom or custom components.  You also know that
hardware designers have long bemoaned their lack of productivity,
although they refer to it as "design turn-around time", and they
haven't been seeing geometric improvements either.  Finally, you will
realize that the use of off-the-shelf components has alternated with
the use of special purpose design, going back at least as far as the
early sixties when the first published circuit books came out.

The third problem I have is with the loose claims of 'geometric' as
opposed to 'linear' improvements in productivity.  I've been reading
about programmer productivity for a long time, from Marvin Minsky's
first claim that advances in programming languages would do away
with the need for programmers within a decade (about thirty years ago),
through James Martin's claims to the same effect ten years ago, until
now.  Nobody even knows what programmer productivity is, let alone at
what rate it has been improving over the last three
decades.  Further, various kinds of programming have received
differing amounts of attention, and ease of accomplishing tasks in
some fields has improved greatly compared to others; for instance,
using 4GL query languages like SQL, it is now possible to
interactively ask for data in a few seconds which used to require
hours of programming plus days of backlog waiting for a programmer to
become available to accomplish.

All in all, many things are important in the improvements that have
been achieved and none of them alone are going to give the ultimate
performance improvement.  Careful implementation of languages for
maximum expressiveness has improved productivity, as has understanding
the way programs should be laid out to aid understanding; but so have
faster machines, interactive operating systems, and decent debuggers.
It all needs to be worked on, and none of it is going to give us magic
productivity enhancements.

reggie@pdnbah.UUCP (George Leach) (10/22/87)

In article <3179@ames.arpa> fouts@orville.nas.nasa.gov.UUCP (Marty Fouts) writes:
>In article <1661@ppi.UUCP> cox@ppi.UUCP (Brad Cox) writes:
 
>>. . . But the improvement has always turned out to
>>be arithmetic in impact.  The geometric improvements needed to bring our
>>productivity in line with that of hardware engineers ........

      [stuff deleted]
 
>I have three problems with this comment.........
 
      [stuff deleted]
 
>The third problem I have is with the loose claims of 'geometric' as
>opposed to 'linear' improvements in productivity.  I've been reading
>about programmer productivity for a long time, .......
 
      [stuff deleted]

>All in all, many things are important in the improvements that have
>been achieved and none of them alone are going to give the ultimate
>performance improvement.  Careful implementation of languages for
>maximum expressiveness has improved productivity, as has understanding
>the way programs should be laid out to aid understanding; but so have
>faster machines, interactive operating systems, and decent debuggers.
>It all needs to be worked on, and none of it is going to give us magic
>productivity enhancements.


       At the recent OOPSLA'87 Conference in Orlando, Peter Wegner stood
up and addressed the members of one of the Panel Discussions (I forget 
which one) and voiced some of the same concerns about Object-Oriented
Programming in general.  The banquet speaker, Michael Jackson, had a
similar point of view. It seems that we are always looking for that
*magic* and want to believe that we are capable of discovering something
to better our lives.  Is OOP the answer?  Being a bit of a sceptic, I
would tend to say NO, but I'll wait until I have some experience under
my belt first.


        Frederick Brooks presented an Invited Paper last year at one of
the IFIP Conferences on this very topic:  No Silver Bullet - Essence
and Accidents of Software Engineering.  It is a highly recommended paper
which takes a down-to-earth view of this topic.  It was originally
published in Information Processing 86, H.J. Kugler (Ed.), Elsevier
Science Publishers B.V. (North-Holland).  However, I believe that it was
reprinted in IEEE Computer within the past 4 or 5 months.


George W. Leach					Paradyne Corporation
{gatech,codas,ucf-cs}!usfvax2!pdn!reggie	Mail stop LF-207
Phone: (813) 530-2376				P.O. Box 2826
						Largo, FL  34649-2826

day@grand.UUCP (Dave Yost) (10/22/87)

In article <3179@ames.arpa> fouts@orville.nas.nasa.gov.UUCP (Marty Fouts) writes:
>The second problem I have is the analogy which isn't stated here, but
>is frequently drawn between "Software-IC" and hardware IC.
>...

Liked your comments.  My $0.02:

Standard cliche:
    Hardware advances are way ahead of software advances.

Questionable interpretation:
    Hardware logic design technology has advanced faster
    than software design technology.

Another interpretation:
    Computer performance has improved more from
    improvements in semiconductor fabrication technology
    and the accelerating cost savings that result from
    mass production than from improvements in software
    design.  By the way, it is hard to say if hardware
    logic design technology has advanced as fast as
    software design.

Of course, where would (hardware) IC design be without
design tools made of software?

In conclusion, predictions of great improvements in
software that will finally catch up with the improvements
in hardware sound hyperbolic to me.  Let's just improve
software technology and leave it at that.

 --dave yost

johnson@uiucdcsp.cs.uiuc.edu (10/27/87)

Saying that class libraries are like subroutine libraries is a gross
misstatement.  O-o programming provides the benefits of code skeletons,
reusable abstract designs, and families of compatible components.
It is possible to simulate all these things in conventional languages,
of course, but nobody does.  That is what makes o-o programming unique.

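For instance, one kind of reusable abstract design, the code skeleton,
can be sketched in C++ (class names invented for illustration): the
base class fixes the shape of the algorithm once, and each subclass
fills in the steps.

    #include <iostream>

    // The reusable part is the skeleton: the base class fixes the
    // shape of the algorithm, and subclasses fill in the steps.
    class Report {
    public:
        virtual ~Report() { }
        void print() {              // the skeleton itself, shared by all
            header();
            body();
            footer();
        }
    protected:
        virtual void header() { std::cout << "=== report ===\n"; }
        virtual void body() = 0;    // each subclass must supply this step
        virtual void footer() { std::cout << "=== end ===\n"; }
    };

    // One member of a family of compatible components.
    class SalesReport : public Report {
    protected:
        void body() { std::cout << "sales figures...\n"; }
    };

    int main() {
        SalesReport r;
        r.print();                  // skeleton runs; subclass fills the hole
        return 0;
    }

The same skeleton is reused, unchanged, by every member of the family.
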
Smalltalk programmers can be very productive when they are working in
an area for which there are a lot of prebuilt classes, such as
user interfaces.  When they have to stop to invent new abstract classes,
their productivity goes down to the level of other languages.  It is
the library of well-designed abstractions that is important, not the
language.

fouts@orville.nas.nasa.gov (Marty Fouts) (10/27/87)

In article <80500019@uiucdcsp> johnson@uiucdcsp.cs.uiuc.edu writes:
>
>Saying that class libraries are like subroutine libraries is a gross
>misstatement.  O-o programming provides the benefits of code skeletons,
>reuseable abstract designs, and families of compatible components.
>It is possible to simulate all these things in conventional languages,
>of course, but nobody does.  That is what makes o-o programming unique.
>

The aspect of reusable source code isn't usually one of the reasons
given for the "software IC" analogy.  Even if it were, I disagree with
your strong statement.  I frequently reuse code skeletons, especially
for the event-driven model required by most window systems.  In fact, I
suspect that "nobody does" is far too strong a statement, and that
many people do in this environment.  I also use the model when doing
device drivers and Berkeley-style network daemons.  "Plagiarize from
the best" has long been my motto.

>Smalltalk programmers can be very productive when they are working in
>an area for which there is a lot of prebuilt classes, such as in
>user interfaces.  When they have to stop to invent new abstract classes
>there productivity goes down to the level of other languages.  It is
>the library of well-designed abstractions that is important, not the
>language.

Right.  And how many user interfaces does the programming world need? (;-)
I agree that there is anecdotal evidence for some zowie performance
improvements in certain areas.  The same is true for Fourth Generation
Languages (4GL) when doing data base queries; scientific subroutine
libraries when doing tractable problems; parser generators when
doing toy compilers; etc.  In the case of Smalltalk programmers
modifying classes, it is the library that is important.  In the case
of 4GL programmers writing queries, it is.  In my usual mode, the
debugger is the biggest productivity tool.

Of course, as you have pointed out, in this case the library is
important, and that is independent of the language.  As we all know,
'object oriented' is a language feature, so it can not be responsible
for the utility of the library. . . (;-)

Depending on which part of the elephant you are standing under, you
can make a claim for some feature as the solution to the elephant's
problems, but you can't claim to solve them all, because it's a big
elephant.

cox@ppi.UUCP (Brad Cox) (10/28/87)

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In article <3179@ames.arpa>, fouts@orville.nas.nasa.gov (Marty Fouts) writes:
> In article <1661@ppi.UUCP> cox@ppi.UUCP (Brad Cox) writes:
> 
> >. . . But the improvement has always turned out to
> >be arithmetic in impact.  The geometric improvements needed to bring our
> >productivity in line with that of hardware engineers will not result from
> >better programming languages, but by focusing our attention outside the
> >language.  For example, by learning to program by producing and reusing
> >components from large libraries of pre-tested Software-ICs. Yes, these
> >libraries are hard to build, and expensive. But each well-tested,
> >well-documented library component provides a geometrical improvement to the
> >productivity of each of its users, and the improvement is open-ended, unlike
> >the productivity enhancement of features that are hardwired into a new
> >programming language.
> 
> I have three problems with this comment.  The one that bothers me the
> most is the standard marketing ploy of renaming something from its
> original lackluster name (library routine) to something that sounds
> exciting, like "Software-IC".  Libraries have been around at least as
> long as programming languages, and have contributed their share to the
> productivity improvement, but they aren't the glorious path to
> geometric improvement, or we would have been seeing geometric
> improvements over the decades.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I'd like to thank Marty for broaching a subject that is precisely at the
heart of where Objective-C departs from the more traditional languages.
The distinction seems to be difficult for many people to grasp at first,
so we coined terms like `Software-IC' and `ICpak' (library of 
Software-ICs) to highlight for them that dynamically bound encapsulation
and inheritance introduce something that is different, FUNDAMENTALLY 
different from programming as done via traditional libraries.

You'd agree that the traditional library concept does not support these
concepts, from which it follows that Software-IC is not simply a fancy name
for library.  It may be less obvious why these differences MATTER, e.g. why
dynamic binding and (to a lesser extent) inheritance, relieve some of the
technical obstacles that have prevented the library concept from introducing
geometric improvements.  I went into all this at great length in my book 
(Object-Oriented Programming: An Evolutionary Approach, Addison-Wesley, 1986),
but I'll summarize the argument briefly here.

Yes, libraries HAVE been around for a long time, and they have certainly 
not been the glorious path to geometric improvement. For example, many 
people have worked very hard to bring about corporate-wide `reusability' 
by collecting large libraries of functions and macros, cataloging them 
in large databases, and publishing them for reuse. The projects have 
generally failed, or at best brought about only arithmetic improvement. 
But why? Is it because the groups responsible for distributing the software,
or their clients, or their managers, were lazy or stupid? No. Was the 
software undocumented, or unreliable, or too slow? No, not usually.  Was 
it because the software was not published via a fully-integrated programming
environment with a glitzy iconic browser? No.

They failed primarily because of an ordinary technical problem (and perhaps
secondarily because of the usual religio-political issues that crop up 
around code reusability). The problem is simply that code stored in a 
conventional library is tightly coupled to the supplier's problem domain,
and its consumers could not apply it easily in their unique environment. 
In other words, the code was statically bound. Static binding turns out 
in this context to be a vice, not the universal virtue that compiler 
developers seem to believe. Static binding produces binary files that are
tightly coupled to that which was known when the code was compiled by the
code's supplier, thus removing the ability of the code's consumer to install
it in his radically different execution environment.  Late binding relieves
this restriction by loosening the coupling between a supplier's reusable
code and the environments his consumers will apply it in. 

To state my position as concretely as possible, static binding is a tool, 
not a panacea (Ada devotees, take note!). Dynamic binding is also a tool, not
a panacea (Smalltalk-80 devotees, take note!). Both tools are specialized for
particular kinds of problem and inappropriate for others. For example,
consider the different kinds of problems in building an automobile.
In designing the AutomobileEngine it is appropriate and useful to
state as early as design time that each EngineCylinder can contain 
only instances of class Piston, and to have this decision strictly
enforced (strict type-checking) during the implementation phase. Static
binding is the right tool for this job. By contrast, in designing the
AutomobileTrunk, it is not desirable to make these kinds of decisions 
any earlier than when the automobile is put into service.  Dynamic binding
is a far better tool for this job, because strict typechecking is entirely
the wrong idea for loosely coupled collections like the trunk. Now
extend this example by imagining the tools a distributor of replacement
automobile parts would need, by analogy with our Software-IC concept. If
static binding were the only tool available for defining replacement parts
like pistons, you'd have a clash between the static binding of the piston
to the cylinder and the more amorphous needs of the distribution channel
involved in putting a replacement piston into service (e.g. how to also
express CrateOfPistons, or worse, PartsInventory?). The piston designer
could never anticipate all of the environments into which his consumers
might want or need to put his piston, and would value a late binding tool
that would move these decisions into the hands of his consumers.

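A sketch of the distinction in C++ syntax (class names from the example
above; all details invented for illustration): the cylinder commits to
Piston at compile time, while the trunk defers the decision to run-time
through a common base class.

    #include <iostream>

    class Piston { };

    // Static binding: the compiler enforces, at compile time, that a
    // cylinder contains only a Piston.  Nothing else will typecheck.
    class EngineCylinder {
        Piston piston;   // only a Piston will ever fit here
    };

    // Dynamic binding: the trunk holds "any part"; what actually goes
    // in is decided only when the car is in service.
    class Part {
    public:
        virtual ~Part() { }
        virtual const char *name() const = 0;
    };

    class SpareTire : public Part {
    public:
        const char *name() const { return "spare tire"; }
    };

    class Toolbox : public Part {
    public:
        const char *name() const { return "toolbox"; }
    };

    class Trunk {
        enum { MAX = 16 };
        Part *contents[MAX];
        int count;
    public:
        Trunk() : count(0) { }
        void put(Part *p) { if (count < MAX) contents[count++] = p; }
        void list() const {
            for (int i = 0; i < count; i++)
                std::cout << contents[i]->name() << "\n"; // bound at run-time
        }
    };

    int main() {
        SpareTire tire;
        Toolbox box;
        Trunk trunk;
        trunk.put(&tire);
        trunk.put(&box);
        trunk.list();
        return 0;
    }
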
So much for the contribution of dynamic binding. How about dynamic binding
as provided by C++ as opposed to Objective-C?  In one sense, the C++ 
virtual function machinery is dynamic in that the implementation is
certainly chosen at run-time based on the recipient.  But in an important
sense, the binding is static because the dispatch is based on compile-time 
knowledge of the receiver's type, at least to the extent of knowing a 
common supertype of all possible receivers. By contrast, Objective-C 
acquired from Smalltalk a different style of binding that is dynamic in
both of these senses; binding is done entirely at run-time. In the 
current implementation, this involves hashing the receiver's class (which
is stored in the receiver at run-time) with the message selector and using
that as an index into a cache of recently-used implementation addresses 
(function pointers). When the cache doesn't contain the desired 
implementation, a slower linear search mechanism kicks in to update the 
cache by consulting dispatch tables stored in each class. Please notice
that the cache is only one of many ways to implement the lookup mechanism.
A fully-indexed implementation that never invokes the linear lookup is 
quite possible (as in C++), but was not used because it imposes unbounded
space overheads that discourage aggressive use of inheritance.

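In rough C++ syntax, the two-level lookup can be sketched as follows
(the data structures are invented for illustration; the actual
Objective-C runtime differs in detail):

    #include <cstring>
    #include <cstddef>
    #include <cstdio>

    typedef void (*IMP)();                  // an implementation address

    struct Method { const char *selector; IMP imp; };

    struct Class {
        const char *name;
        const Class *superclass;            // consulted when lookup fails
        const Method *methods;              // this class's dispatch table
        int methodCount;
    };

    // A small cache of recently used implementations, indexed by hashing
    // the receiver's class together with the message selector.
    enum { CACHE_SIZE = 64 };
    struct CacheEntry { const Class *cls; const char *sel; IMP imp; };
    static CacheEntry cache[CACHE_SIZE];

    static unsigned hashMsg(const Class *cls, const char *sel) {
        unsigned h = 0;
        for (const char *p = sel; *p; p++) h = h * 31 + (unsigned char)*p;
        return (unsigned)(((std::size_t)cls >> 3) ^ h) % CACHE_SIZE;
    }

    // Slow path: linear search of the dispatch tables up the superclass
    // chain; on success the cache is refilled for next time.
    static IMP lookupSlow(const Class *cls, const char *sel) {
        for (const Class *c = cls; c; c = c->superclass)
            for (int i = 0; i < c->methodCount; i++)
                if (std::strcmp(c->methods[i].selector, sel) == 0) {
                    CacheEntry &e = cache[hashMsg(cls, sel)];
                    e.cls = cls; e.sel = sel; e.imp = c->methods[i].imp;
                    return e.imp;
                }
        return 0;                           // "message not understood"
    }

    IMP dispatch(const Class *cls, const char *sel) {
        CacheEntry &e = cache[hashMsg(cls, sel)];
        if (e.cls == cls && e.sel && std::strcmp(e.sel, sel) == 0)
            return e.imp;                   // fast path: cache hit
        return lookupSlow(cls, sel);        // miss: fall back and refill
    }

    static void sayHello() { std::printf("hello\n"); }

    int main() {
        Method objectMethods[] = { { "hello", sayHello } };
        Class objectClass = { "Object", 0, objectMethods, 1 };
        Class pointClass  = { "Point", &objectClass, 0, 0 }; // inherits
        IMP imp = dispatch(&pointClass, "hello"); // slow path, fills cache
        if (imp) imp();
        imp = dispatch(&pointClass, "hello");     // cache hit this time
        if (imp) imp();
        return 0;
    }
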
A recent article described an example that is useful for pointing out
the advantages of totally dynamic binding. Building a HashTable class
requires that all HashTable members provide hash and isEqual: methods
that the HashTable needs. But how can the HashTable supplier (who doesn't
control the members' common superclass) arrange this? Since Objective-C
style binding does not require the members to have a common superclass,
one solution would be to require each newly-written member class
to provide the two needed methods, in which case newly-developed
classes will work correctly as members even though they have no common
superclass, but older classes will not. But a better solution is possible
that automatically fixes the older classes too.  The HashTable supplier
can provide an additional class, HashTableMember, that defines default
semantics for only the hash and isEqual: methods (for example: unless
overridden, two objects are equal if and only if they are exactly the
same object). He can direct his consumers to incorporate this class into
every application that uses HashTable. At startup time, the class will
send itself a special message that causes HashTableMember's dispatch table
to be inserted at the front of Object's dispatch table (taking care to 
update the cache as well). Presto, ALL objects immediately recognize the 
two new methods. Similar classes could also be supplied to provide
special hash and isEqual: methods for those already-released classes that
should override the default implementation with specialized ones.

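The default semantics themselves are easy to sketch. A statically bound
notation like C++ has no analogue of the dispatch-table splice, but a
member protocol with identity-based defaults might look roughly like
this (details invented for illustration):

    #include <cstddef>
    #include <iostream>

    // Default member protocol: unless overridden, two objects are equal
    // if and only if they are exactly the same object, and the hash is
    // derived from the object's identity.
    class HashTableMember {
    public:
        virtual ~HashTableMember() { }
        virtual std::size_t hash() const {
            return (std::size_t)this >> 3;      // identity-based default
        }
        virtual bool isEqual(const HashTableMember &other) const {
            return this == &other;              // identity-based default
        }
    };

    // A class with value semantics overrides both methods together, so
    // that equal objects also hash equally.
    class Symbol : public HashTableMember {
        int id;
    public:
        Symbol(int i) : id(i) { }
        std::size_t hash() const { return (std::size_t)id; }
        bool isEqual(const HashTableMember &other) const {
            const Symbol *s = dynamic_cast<const Symbol *>(&other);
            return s != 0 && s->id == id;
        }
    };

    int main() {
        Symbol a(7), b(7);
        HashTableMember c;
        std::cout << a.isEqual(b) << "\n";  // 1: same id, distinct objects
        std::cout << c.isEqual(c) << "\n";  // 1: identical object
        return 0;
    }
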
We actually solved this particular case by simply implementing hash and
isEqual: in the Object class. The method donor mechanism (also known as
poseAs:) is generally used for other problems, such as for repairing and/or
extending code that has already been released to the field. For example,
the most recent case was to extend our (already released) Object class 
with a mechanism for storing lists of those objects that depend on other 
objects so that iconic user interfaces can be automatically updated 
whenever the objects they're interfacing have changed.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> The second problem I have is the analogy which isn't stated here, but
> is frequently drawn between "Software-IC" and hardware IC.  If you
> follow the component industry at all, you know that the age of TTL
> 7400 series ICs has all but ended and almost all serious design now is
> being done with semicustom or custom components.  You also know that
> hardware designers have long bemoaned their lack of productivity,
> although they refer to it as "design turn-around time", and they
> haven't been seeing geometric improvements either.  Finally, you will
> realize that the use of off-the-shelf components has alternated with
> the use of special purpose design, going back at least as far as the
> early sixties when the first published circuit books came out.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you're saying that late binding is not a panacea, I agree wholeheartedly.
It is a tool; something to be picked up or laid aside according to the job
at hand. I fault Smalltalk-80 for not providing any tools for doing early
binding, and I fault Ada for not providing any tools for doing late binding.
Both C++ and Objective-C avoid this trap, although C++ provides stronger 
tools than C (and thus Objective-C) for doing early binding and Objective-C 
provides stronger tools than C++ for doing late binding.

As you pointed out, hardware designers use tools, not panaceas, and feel
free to choose the most effective tools for any job. At times, they choose
off-the-shelf components, and at other times they choose to build custom
logic.  Nonetheless, and in spite of the bemoaning on the part of hardware 
designers, it does seem that the geometric improvement has been realized, if 
not from each individual hardware designer, then certainly by the companies 
that employ them. When I was in graduate school twenty years ago, the EE
department built its own computer (the Maniac II) from discrete components.  
Then computers-on-a-chip came out and for a while it became fashionable for
departments, and soon thereafter individuals, to build their own computers.
Moore's law predicts a yearly doubling of the number of components per chip.
This works out (2^20) to a million-fold improvement over these twenty years.
I sense that change of a similar magnitude has been demonstrated in computing
power delivered to the consumer, and possibly per man-hour consumed in 
delivering it. I'd be grateful for any hard data to support or contradict
this conjecture. I am not aware of anything like a million-fold improvement
in software.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> The third problem I have is with the loose claims of 'geometric' as
> opposed to 'linear' improvements in productivity.  I've been reading
> about programmer productivity for a long time, from Marvin Minsky's
> first claim that advances in programming languages would do away
> with the need for programmers within a decade (about thirty years ago),
> through James Martin's claims to the same effect ten years ago, until
> now.  Nobody even knows what programmer productivity is, let alone at
> what rate it has been improving over the last three
> decades.  Further, various kinds of programming have received
> differing amounts of attention, and ease of accomplishing tasks in
> some fields has improved greatly compared to others; for instance,
> using 4GL query languages like SQL, it is now possible to
> interactively ask for data in a few seconds which used to require
> hours of programming plus days of backlog waiting for a programmer to
> become available to accomplish.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You've alluded to several tools that have indeed contributed geometric
improvements in specialized areas. I'd extend this list with my own personal
favorites, the now well-established pipes/filters concept from Unix,
program generators like yacc/lex/4GL, and the less-well-known fully-dynamic
style of binding employed in typeless languages like Smalltalk-80 and 
hybrid languages like Objective-C. I promote the latter tools more
extensively than the former, not because they're better, but because
they're less well-known.

Regarding the word productivity, if you can offer a precise definition, 
we'd all be glad to use it. But that won't change the urgency of people's
need to improve it, or to discuss it, any more than it does for other
imprecisely defined terms like `the trade deficit', `the stock market',
or `Company X's image in the marketplace'.

> All in all, many things are important in the improvements that have
> been achieved and none of them alone are going to give the ultimate
> performance improvement.  Careful implementation of languages for
> maximum expressiveness has improved productivity, as has understanding
> the way programs should be laid out to aid understanding; but so have
> faster machines, interactive operating systems, and decent debuggers.
> It all needs to be worked on, and none of it is going to give us magic
> productivity enhancements.

Who said magic? I said tools, not panaceas, and geometric improvements,
not magic. I do believe that the proper use of all available tools CAN
move programmer productivity from an arithmetic to a geometric growth curve.

johnson@uiucdcsp.UUCP (10/28/87)

/* Written 11:33 am  Oct 27, 1987 by fouts@orville.nas.nasa.gov in uiucdcsp:comp.lang.smalltalk */
...
"Plagiarize from the best" has long been my motto.
...
As we all know, 'object oriented' is a language feature, so it can not 
be responsible for the utility of the library. . . (;-)
...
/* End of text from uiucdcsp:comp.lang.smalltalk */

While I strongly agree with the first statement, I disagree with the second.
O-o programming is as much a programming style as a language feature.
When you consider all the people who have built o-o programming systems
using a preprocessor for C, it is clear that o-o programming does not
require much language support.

The problem is knowing what to plagiarize from.  Soon user interface
systems will settle down, and we will move on to building design
frameworks for some other area.  For example, I am involved in an operating
system with an object-oriented design.  If our design is successful, we
should be able to easily build customized operating systems.

Many application areas could benefit tremendously from an object-oriented
framework.  For example, AT&T has thousands of programmers building
switching systems.  One of the things that makes the problem so difficult
is that every delivered system is a little different.  On the other hand,
all the systems are more-or-less the same.  This is strong evidence that
an object-oriented framework for switching systems could help a great deal.

fouts@orville.nas.nasa.gov (Marty Fouts) (11/07/87)

Dynamic binding / late binding has been around for as long as
programming languages (well, at least as long as Lisp).  There really
isn't any need, other than marketing hype, to invent a new name for it.
You don't really achieve anything except confusion by picking a catchy
name for a technical concept.  Dynamic libraries have also been
around as long as dynamic binding, and have been cited as one of the
indirect causes of the lack of acceptance of Lisp by programmers
outside of the AI community.  By being easy to modify, partially
because of the ability to dynamically bind functions, Lisp became a
true Babel of dialects, making code sharing more difficult, not less,
until the recent effort at Common Lisp.  (Thank you, Guy Steele et
al.)

I agree that if you include dynamic binding as one of your conditions
for calling something a software-IC, then you have limited the class
of libraries you are discussing, but I strongly disagree with
obscuring the difference by giving it a marketing name; and I doubt
that it is a fundamental difference.  Dynamic binding is a function of
run-time environment implementation which is shared by Basic, Lisp,
and most object-oriented languages.  Programming in Basic certainly
isn't going to make me more efficient (;-)

You seem to confuse several important topics in programming language
design, and appear to be using one feature to answer a problem in a
different area of program design.  For example, late binding does not
solve the issue of problem domain by itself.  As a user of a matrix
library, I don't care if I bind at compile time or execution time to
your matrix multiply routine; I care if it multiplies matrices stored
in the data structures I am using.  You can make me explicitly
remember that MXMTRI multiplies triangular matrices and that MXMSQ
multiplies square matrices and make my life a little harder, or
you can use type overloading and a generic package to let me just use
MXM.  As long as you have supplied an algorithm to multiply the
matrices in the form I store them in, it doesn't matter if this second
mechanism is done statically, as in Ada, or dynamically, as in
Smalltalk; what matters is that you implemented it.  The problem of
problem domain is to supply a sufficiently rich library of routines to
meet my application needs in a way which allows me to easily access
the correct routine.  Either a correctly implemented Ada package or
Smalltalk type hierarchy would work.

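For example, the overloaded interface might be sketched in C++ as
follows (representations and names invented for illustration); I write
MXM either way, and the compiler binds statically to whichever routine
matches my storage scheme:

    #include <cstdio>

    // Two storage schemes (invented for illustration): dense square
    // storage and packed lower-triangular storage.
    struct SquareMatrix     { int n; double a[16]; };
    struct TriangularMatrix { int n; double packed[16]; };

    // One name, several implementations; the compiler binds statically
    // to whichever routine matches the representation in use.
    SquareMatrix MXM(const SquareMatrix &x, const SquareMatrix &y) {
        SquareMatrix r;
        r.n = x.n;
        for (int i = 0; i < x.n; i++)
            for (int j = 0; j < x.n; j++) {
                double s = 0.0;
                for (int k = 0; k < x.n; k++)
                    s += x.a[i * x.n + k] * y.a[k * x.n + j];
                r.a[i * x.n + j] = s;
            }
        return r;
    }

    TriangularMatrix MXM(const TriangularMatrix &x,
                         const TriangularMatrix &y);
    // ... and so on, one overload per supported representation.

    int main() {
        SquareMatrix a = { 2, { 1, 2, 3, 4 } };
        SquareMatrix b = { 2, { 5, 6, 7, 8 } };
        SquareMatrix c = MXM(a, b);
        std::printf("%g %g / %g %g\n", c.a[0], c.a[1], c.a[2], c.a[3]);
        return 0;   // prints 19 22 / 43 50
    }
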
The example of the automobile design is one of programming
methodology.  Static language proponents claim that this same early
design/late design concept can be implemented using stepwise
refinement and rapid prototyping, while dynamic language proponents
claim that it can be accomplished using dynamic stepwise refinement
and dynamic rapid prototyping.  I've had about equal success with
either approach, one being better for some problems, the other for
different problems.

I would like to take a moment to clear up an ambiguity about hardware.
My point about hardware design wasn't about the power of the hardware
being designed, but about the design process, which is what we are
discussing.  Almost all of the performance improvement in computers
has come as a result of changes in the realization media which allow
for smaller component areas and faster clock speeds.  It still takes
as long to design a computer of a particular level of complexity as it
ever has.  (BTW, Moore's law as you quote it is wrong.  Twenty
years ago, typical chips had a few hundred gates; now they have tens of
thousands.  That is a 100-fold improvement, not a million-fold one.)

My point about productivity wasn't that you hadn't defined it and I
could, but rather that it is meaningless to talk about geometric
versus arithmetic growth of something which can't be (or at least
hasn't been) quantified.

I'm the one who said magic, and I will continue to say magic when I
hear "geometric productivity improvements."  I agree that language
design features such as generic packages, operator overloading, and
dynamic binding are useful and belong in a good programmer's tool kit,
along with large libraries of reusable code, incremental compilers,
source-level debuggers, and source code management tools.  I just try to
point out that these features have been around for a long time, have
been used extensively in some areas, and haven't lived up to
"geometric productivity improvements".

mitsu@well.UUCP (Mitsuharu Hadeishi) (11/08/87)

	Before I begin, I'd like to extend kudos to Brad Cox for so
clearly expressing the advantages of object-oriented programming and in
particular the distinction between early and late binding and the advantages/
disadvantages of both.  Now, for some comments . . .

In article <1662@ppi.UUCP> cox@ppi.UUCP (Brad Cox) writes:
>> Libraries have been around at least as
>> long as programming languages, and have contributed their share to the
>> productivity improvement, but they aren't the glorious path to
>> geometric improvement, or we would have been seeing geometric
>> improvements over the decades.
>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
. . .
>
>Yes, libraries HAVE been around for a long time, and they have certainly 
>not been the glorious path to geometric improvement. For example, many 
>people have worked very hard to bring about corporate-wide `reusability' 
>by collecting large libraries of functions and macros, cataloging them 
>in large databases, and publishing them for reuse. The projects have 
>generally failed, or at best brought about only arithmetic improvement. 
. . .
>They failed primarily because of an ordinary technical problem (and perhaps
>secondarily because of the usual religio-political issues that crop up 
>around code reusability). The problem is simply that code stored in a 
>conventional library is tightly coupled to the supplier's problem domain,
>and its consumers could not apply it easily in their unique environment. 
>In other words, the code is statically bound.

	I would like to add, it is also because of the lack of object-
oriented design of the libraries themselves.  When someone writes code
that depends on the specific internal representation of an object, rather
than its interface, it becomes difficult if not impossible to change those
libraries or to improve their implementation.  In addition, if the libraries
are coupled to a particular problem domain, they are in general unusable for
other, even closely similar problems.

	One of the advantages you get in a language like C++ is the ability
to specify an interface to a class which cannot be breached by casual clients.
You may access those members of a class which are public, but you cannot
access private methods (subroutines for the implementation) or directly change
data members (instance variables).  You can then change the implementation
radically, in such a way that if you were to write such a class in C
it would require changing the way you called the various method functions.
C++ provides you with the ability to maximize performance via the use of
inline methods.  You may freely change which methods are inline and which are
implemented in object code, without changing the source code of the client
at all; something which would be impossible in C.  In addition, you have the
power to redefine operators to improve code readability without adding any
additional overhead.  These features (which amount to an ability to change
the nature of the compiler in a convenient, well-specified manner) also
contribute to code reusability in that you need not modify client source
just because a library has been changed to improve performance in some way.
I recently defined a variable-length string class, defined + and += as
concatenation and append operators, and wrote some source to test it out.
I then completely reimplemented it, making + be simply an inline call to +=
with the additional creation of a temporary to hold the result.  This change
would have totally changed the source, had I written it as traditional C
function calls; however, since I simply redefined the meaning of the operators
in the header file, I didn't have to change the source one bit.

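	A sketch of the idea (reconstructed for illustration; not the
actual class) might look like this:

    #include <cstring>
    #include <cstdio>

    // A small variable-length string class along the lines described
    // above (details invented for illustration).
    class String {
        char *buf_;
    public:
        String(const char *s = "") : buf_(dup(s)) { }
        String(const String &o) : buf_(dup(o.buf_)) { }
        ~String() { delete [] buf_; }
        String &operator=(const String &o) {
            if (this != &o) { delete [] buf_; buf_ = dup(o.buf_); }
            return *this;
        }
        // Append in place.
        String &operator+=(const String &o) {
            char *nb = new char[std::strlen(buf_) +
                                std::strlen(o.buf_) + 1];
            std::strcpy(nb, buf_);
            std::strcat(nb, o.buf_);
            delete [] buf_;
            buf_ = nb;
            return *this;
        }
        const char *c_str() const { return buf_; }
    private:
        static char *dup(const char *s) {
            char *p = new char[std::strlen(s) + 1];
            std::strcpy(p, s);
            return p;
        }
    };

    // Concatenation defined in terms of += : a temporary holds the
    // result, exactly the reimplementation strategy described above.
    inline String operator+(const String &a, const String &b) {
        String t = a;      // temporary to hold the result
        t += b;
        return t;
    }

    int main() {
        String s = String("soft") + String("ware");
        s += String("-IC");
        std::printf("%s\n", s.c_str());   // prints software-IC
        return 0;
    }

Since + lives in the header and is defined in terms of +=, a client
that says a + b never needs to change when the implementation
underneath does.
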
	Of course, you still have to recompile, which is a disadvantage of
C++ (over Smalltalk or Objective-C).  Nonetheless, for many projects
(in particular the microcomputer project we at EA normally work on) the
efficiency of C++ is well worth the cost of recompilation.  And it's a far
sight better than having to rewrite client source.

>Building a HashTable class
>requires that all HashTable members provide hash and isEqual: methods
>that the HashTable needs.
...
>At startup time, the class will
>send itself a special message that causes HashTableMember's dispatch table
>to be inserted at the front of Object's dispatch table (taking care to 
>update the cache as well). Presto, ALL objects immediately recognize the 
>two new methods.

	Of course, one could provide a similar flexibility in C++ by providing
a parent superclass for all objects (Object).  The clients would be given
a set of new methods which could be added to the Object class; there would
be the overhead of the users of the class having to edit their definition
files for Object, or, alternatively, relinking with a new Object library
(presuming there is only one source of modifications to Object).  One may
also choose to make only some objects subclasses of Object; for example,
a simple linked list needn't be a subclass of Object, since it is
in general not placed in a collection (such as a HashTable).  Of course,
one cannot rule out such a desire, but that's the tradeoff you get with
a language such as C++.

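	As a rough sketch (invented names; a toy fixed-size table), the
point is that the HashTable relies only on the Object protocol and
never looks inside its members:

    #include <cstddef>

    // A root class carrying the collection protocol; only classes that
    // are meant to go into collections need derive from it.
    class Object {
    public:
        virtual ~Object() { }
        virtual std::size_t hash() const { return (std::size_t)this >> 3; }
        virtual bool isEqual(const Object &o) const { return this == &o; }
    };

    // A toy open-hashing table that depends only on the Object
    // protocol; members may change representation freely underneath it.
    class HashTable {
        enum { BUCKETS = 31, PER_BUCKET = 8 };
        Object *slot[BUCKETS][PER_BUCKET];
        int used[BUCKETS];
    public:
        HashTable() { for (int i = 0; i < BUCKETS; i++) used[i] = 0; }
        void add(Object *o) {
            int b = (int)(o->hash() % BUCKETS);
            for (int i = 0; i < used[b]; i++)
                if (slot[b][i]->isEqual(*o)) return;   // already present
            if (used[b] < PER_BUCKET) slot[b][used[b]++] = o;
        }
        bool contains(const Object &o) const {
            int b = (int)(o.hash() % BUCKETS);
            for (int i = 0; i < used[b]; i++)
                if (slot[b][i]->isEqual(o)) return true;
            return false;
        }
    };

    int main() {
        Object a, b;
        HashTable t;
        t.add(&a);
        return (t.contains(a) && !t.contains(b)) ? 0 : 1;
    }
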
>If you're saying that late binding is not a panacea, I agree wholeheartedly.
>It is a tool; something to be picked up or laid aside according to the job
>at hand. I fault Smalltalk-80 for not providing any tools for doing early
>binding, and I fault Ada for not providing any tools for doing late binding.
>Both C++ and Objective-C avoid this trap, although C++ provides stronger 
>tools than C (and thus Objective-C) for doing early binding and Objective-C 
>provides stronger tools than C++ for doing late binding.

	Exactly.  My colleague Rick Tiberi drew up a sheet the other day
with a visual diagram illustrating all of the languages he has used in his
career, with the metric being both the "level" of the language and
how far down it lets you reach.  C++ stood out among all the languages
in that it (unlike Smalltalk) allows you to implement without losing
any efficiency over a traditional language, and it also allows you to go
quite far up in terms of being able to define classes with type checking,
automatic type conversion, and protected, private, and public data and
method members (allowing object-oriented programming).  In addition, you
can choose to optimize particular classes for efficiency equal to that
of C, and to generalize other classes for flexibility approaching that of
Smalltalk.  Objective-C shares a similar range, although it provides less
support at the lower end for optimizing performance while giving more
support at the higher end for flexibility.  One can achieve similar
goals in the realm of code reusability with either language, with a
little more source-code dependence in the C++ case (although it does
provide object-code reusability via virtual functions) and a little
less efficiency in the Objective-C case.

			-Mitsu Hadeishi
			 CDI Research and Development
			 Electronic Arts