[comp.object] Overused metaphors - Software ICs, etc.

carroll@udel.edu (Mark Carroll <MC>) (07/08/90)

In many discussions about Object-oriented languages, and particularly
in the discussion of C++ -vs- Objective C, there seems to be a major
problem caused by rampant overuse of metaphors. We ramble onwards in
our discussion of why our favorite language is superior, and as proof
of this, we present some argument in terms of "Well, MY language is
like working at the board level, but HIS language is like the IC level."

The arguments that typically run like this are virtually useless. Software
is NOT hardware; software components are not really comparable to hardware
components; the process of developing and debugging software is not directly
comparable to the process of developing and debugging hardware. In particular,
it makes no sense to say that THIS object-oriented language is equivalent to
THAT language, and then to argue the relative merits of languages based on
the hardware components we compare them to.

If you feel that language A has a superior object model, say so. And argue
it in terms of the languages and models that you're comparing. A metaphor
can be a useful tool for understanding - but not for proof, and not for
comparison.

If you think that the single inheritance tree approach of Objective-C
is superior to the forest approach of C++ because of the common root
of every object, say so. If you like the dynamic nature of Objective-C,
say so. Don't make arguments that Objective-C is using software
boards, and C++ is using chips. That doesn't really mean anything.

	<MC>

--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@dewey.udel.edu    | We'll elect a precedent to a state of mind" -Fish

rtrosper@esunix.UUCP (Robert Trosper) (08/02/90)

Ah well, what goes around comes around again. We tried using this
"Software IC" concept as a metaphor for a CAD system implemented in
Mainsail back in '85 or '86. The system is still marketed by HP,
but that's neither here nor there. In any case, each object actually
had a required number of entry points (or methods, or messages, or
whatever you like), the same for each class to take the concept
of standardization further than is usually done.

So we used this software IC analogy to try to persuade the rest of the
world that they could indeed use our system as a development base.
Just pledge on your honor to provide these entry points and your
object will plug right on in.

There is a long, tangled and not necessarily pretty history here,
but the bottom line is that the concept never sold, and not because
of technical reasons. Part of it was due to the dreaded NIH
syndrome, part of it to the non-mainstream nature of the idea,
and part of it to the lack of adequate documentation we provided
because of lack of staff and lack of commitment.

The other problem is that the analogy breaks down at so many levels
that it is really more useful as a marketing bleat (sorry, Brad) than
an actual metaphor for use by techies.

F'r instance - hardware IC's are specified down to the last n'th by a well
defined set of parameters including but not limited to :

truth tables
voltage
current
number and function of pins
fan-in and out
capacitance
packaging

Let's try the analog process here - software IC's are specified by

return values for a given input
Language perhaps for voltage and current? Operating system? Any ideas?
number and function of methods (procedures)
concurrency - or re-entrance?
Does it run slower when interfaced to certain other software IC's?
Again, perhaps language - like are they in a procedure library
or a method library - C or Fortran or Cobol ....

So - let's talk a little about that interface problem BETWEEN software
IC's - out there in the hardware world the interface parameters are not
negligible, but within a family of implementation there is pretty
good standardization. If you can cobble up a +5V and a +12 -12 power
supply you can build a whole bunch of stuff. And the current ranges,
and most importantly the input or output states in digital logic will
be pretty simple (high, low, perhaps high impedance, or God forbid
unknown when you don't want it). Not quite so clean to feed the
"output" of one software IC to another - integer, real, string, boolean
to another expecting....

The best case might be a standard set of messages, but that standard
set can grow pretty large in a hurry once you're beyond the bounds
of simple logic.

Well - not to beat a dying horse. Let the Software IC myth go quietly
into its own good night. Useless things all go there anyway.

				Robert Trosper

cox@stpstn.UUCP (Brad Cox) (08/08/90)

In article <2088@esunix.UUCP> rtrosper@esunix.UUCP (Robert Trosper) writes:
:
:Well - not to beat a dying horse. Let the Software IC myth go quietly
:into its own good night. Useless things all go there anyway.
:
:				Robert Trosper

My crystal ball is no better than anyone else's. But just bear in mind 
that it was the cottage industry gunsmiths, and not interchangeable parts,
that were discovered to be useless during the industrial revolution.
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

kim@spock (Kim Letkeman) (08/08/90)

In article <5436@stpstn.UUCP>, cox@stpstn.UUCP (Brad Cox) writes:
| In article <2088@esunix.UUCP> rtrosper@esunix.UUCP (Robert Trosper) writes:
| :
| :Well - not to beat a dying horse. Let the Software IC myth go quietly
| :into its own good night. Useless things all go there anyway.
| :
| :				Robert Trosper
| 
| My crystal ball is no better than anyone else's. But just bear in mind 
| that it was the cottage industry gunsmiths, and not interchangeable parts,
| that were discovered to be useless during the industrial revolution.

Hmmm ... I would suggest that during the industrial revolution the
cottage industry gunsmiths were merely replaced by more advanced and
efficient methods, not by a plethora of interchangeable parts. 

In other words, if I want to purchase a gun today, I do not go and
purchase all of the necessary "firearm IC's" and build my own.
Instead, I go and buy either a mass produced gun, or I spend the big
bucks for a gun built by one of the remaining "cottage industry master
craftsmen of gunsmithing."

In many of these "paradigm shift" or "software IC" or "product versus
process" arguments, the basis is a flawed analogy. One simply cannot
compare the extremely complex software creation process with anything
that is manufactured in the more traditional sense.

I'm certain that OOP will make a huge dent in the way in which
software is built in the future, but I can't believe that it will
completely replace existing methods in every situation.

I am getting a bit less enthused by the "paradigm shift | software IC
| product versus process" discussion each time I read it. There have
been too many excellent postings in the last few weeks that have been
opening holes in the arguments, and flawed analogies don't plug them.

-- 
Kim Letkeman    mitel!spock!kim@uunet.uu.net

duncan@dduck.ctt.bellcore.com (Scott Duncan) (08/09/90)

In article <4078@kim> kim@spock (Kim Letkeman) writes:
>
>In many of these "paradigm shift" or "software IC" or "product versus
>process" arguments, the basis is a flawed analogy. One simply cannot
>compare the extremely complex software creation process with anything
>that is manufactured in the more traditional sense.

I fully agree since I don't believe there is ANY manufacturing in software
development.  As others have suggested in their posts back as far as June,
software development is an "engineering" activity.  It leads to creation of a
(somewhat) custom system "prototype."  In software, we can replicate the
prototype without any tooling, assembly line, etc. since it is an almost
totally non-physical thing being copied.

>I'm certain that OOP will make a huge dent in the way in which
>software is built in the future, but I can't believe that it will
>completely replace existing methods in every situation.

I think more and more traditional application (as opposed to algorithm)
development will depend on greater use of existing components.  I do not
think the software industry can survive financially without such improvements.
However, it will be in order to customize software more easily, not to create
identical copies for the mass market -- that we can do already with far less
cost than any manufactured product.

I think we need to expend more effort understanding the potential impact of OO,
etc. on design, rather than implementation, issues in software.  The one thing
about software that just about everyone understands is that it is physically
possible to do almost anything with it and that individual physical changes to
software are easily accomplished.  Hence, we find that the limitations and
constraints in software (as opposed to other engineered items) are mostly those
of human comprehension/understanding and group coordination/communication.

Many software engineering efforts in the reuse and OO arena seem aimed at
saving resources in implementation of software.  Yet the largest number of
problems in a software system seem to be traceable back to requirements and
design flaws.  I feel the impact of OO technology (and associated reuse
advantages claimed) will be in making it easier to explore design
and eliminate ambiguities in requirements by allowing more iterations of the
prototyping cycle in software.

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (908-699-3910 (w)   609-737-2945 (h))

cpp@sei.cmu.edu (Charles Plinta) (08/10/90)

In article <5427@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes: 
 >Although I don't quibble with your analysis, I quail every time I hear
 >that the software community has revved up another language standardization
 >hoo ha. We have *got* to be the only group in the universe that
 >standardizes our *processes* (languages, methodologies) but never
 >gets around to standardizing the *products* of those processes 
 >(Stacks, Queues, etc). It's as if the hardware community standardized
 >their wave solder machines expecting to avoid the need for standard 
 >bus structures.

I agree.  I'm not exactly sure why the focus seems to be on standardization of
process instead of product.  It might have something to do with the
"products" being software, which, in and of itself, can be "easily"
modified.  This is quite different from products produced by other
disciplines (engines, chips and circuit boards, bridges, etc.).  But
nonetheless, it is no excuse.  I realize this is Comp.object, and I risk
getting shot at by making the following statements, but here goes anyway:

   
    The current craft of developing software-dependent systems will begin 
    to move towards an engineering discipline only when the craftsmen and 
    craftswomen begin to capture and quantify their products.  

    Only when their products are quantified will the steps of defining the
    process, monitoring the process, and refining the process make sense.
    Processes are based on producing products; to me it doesn't make sense 
    to define a process until I know what my product will look like.

    Quantifying products will also allow companies to develop systems 
    more effectively within the context of their "product line".
    
    Also, then tools can be made to help automate the production process.  
    Look at some of the CAE tools.  They help the engineer lay out a board 
    design based on the components available and the manner in which they 
    can be connected.

Also, products such as nuts and bolts (stacks and queues) are not 
the products I'm talking about.  Engineers designing a car, a bridge, 
or the next generation fighter, do not start with nuts and bolts.  They 
have higher level concepts/components available to them.  These concepts 
are present in the "standard" architecture, and this tends to be their 
starting point.  For example, cars are not designed from scratch each year.
A basic architecture is chosen, and some aspects of the car are enhanced
or changed.  

Even something as different looking as the B-2 is a variation of the standard
bomber architecture.  It still has a hydraulic system, electrical system,
etc., but they modified the design to optimize it based on the 
characteristics of the aircraft desired by the customer.
  1. minimal cross section (radar)
  2. minimal IR signature
These factors affected the airframe (wings and fuselage) of the final product,
and thus the unconventional look.  These are probably just a few of the
optimizations made to the design which characterize the B-2, and they are
the most obvious.

In article <4078@kim>, kim@spock (Kim Letkeman) writes:
 >In article <5436@stpstn.UUCP>, cox@stpstn.UUCP (Brad Cox) writes:
 >| In article <2088@esunix.UUCP> rtrosper@esunix.UUCP (Robert Trosper) writes:
 >| :
 >| :Well - not to beat a dying horse. Let the Software IC myth go quietly
 >| :into its own good night. Useless things all go there anyway.
 >| :
 >| :				Robert Trosper
 >| 
 >| My crystal ball is no better than anyone else's. But just bear in mind 
 >| that it was the cottage industry gunsmiths, and not interchangeable parts,
 >| that were discovered to be useless during the industrial revolution.
 >
 >Hmmm ... I would suggest that during the industrial revolution the
 >cottage industry gunsmiths were merely replaced by more advanced and
 >efficient methods, not by a plethora of interchangeable parts. 

But the key here is that a product existed around which the process for
producing them could be defined.  I wonder if the "baseline" product
produced by the craftsmen was optimized to allow mass production and to
allow it to easily be fixed in the field (interchangeable parts).  Were these
the major design considerations for optimizing the baseline product design?
I also wonder which came first, product design or process definition.


In article <4078@kim>, kim@spock (Kim Letkeman) writes:
 >In many of these "paradigm shift" or "software IC" or "product versus
 >process" arguments, the basis is a flawed analogy. One simply cannot
 >compare the extremely complex software creation process with anything
 >that is manufactured in the more traditional sense.

I'm not totally aware of all these arguments, but I do not agree with the
statement that the software creation process or products are inherently
complex.  Using current "roll your own" or "nut and bolt" techniques, the
creation process and products are obviously going to be complex.  Can you
imagine a bridge architect, not knowing about suspension bridges, or post
and beam styles, etc., being given a pile of nuts, bolts, beams, cables, etc.,
and a set of requirements to span a valley for vehicle movement
purposes, with a set of parameters describing peak and average loading, wind
factors, etc.?  I can imagine the product and process used to build it would
seem complex to the "untrained" eye (i.e., anyone who had not seen such a
structure, or seen such a structure built before).  This is my approximation
of where software-dependent systems development currently stands, and it will
not get better as long as the focus is on process, managing complexity, and
"small" products.

To advance, we need to reduce complexity.  I have seen instances where 
complexity has been reduced by focusing on the patterns of entities and
activities that exist in the environment in which a system must exist.
In this manner, the system architecture can be expressed with a minimum 
number of architectural elements.  For example:
  * a house has doors, windows, rooms, roof, etc.  A specific house has a 
    specific configuration of these architectural elements.  

  * a C3 (command, control, and communications) system has 
    message translator and validators, journalers, report generators, 
    message analyzers, database managers, graphic generators, table 
    generators, etc.
      [This is based on our work examining C3 system requirements]
    A specific C3 system has a specific configuration of these 
    architectural elements based on the messages that need to be processed, 
    the specific processing (algorithms) that needs to occur to support the 
    mission, the graphics that need to be displayed to allow the user to 
    carry out the mission, etc.

To advance, we need to strive for standard products. I have seen instances
where general, customizable products have been produced that are "larger"
than stacks and queues.  For the C3 system architecture mentioned above, we
have created a model solution for the message translator and validator
architectural element, so when this architectural element appears in the
design, there is at least one implementation that can be used.  We have
experimented with other model solutions for the architectural elements
listed above, to the point that we are convinced that model solutions for
all architectural elements are feasible, and that these model solutions are
scalable or customizable for a specific C3 system.  By scalable or
customizable, I mean in the sense that a bolt is a bolt: it is a fastener,
whether it is a 1/4-20 (1/4 inch diameter and 20 threads per inch) or
5/16-32, whether it has a #1 phillips, #2 phillips, slotted, or hexagonal
head, and whether it is made of steel, aluminum, or some alloy.  
All these variations on the bolt are merely optimizations and tradeoffs made 
for strength, cost, weight, etc.
In a larger sense, a suspension bridge is a scalable model also, and these
have been quantified to the point that the driving characteristics of a
suspension bridge have been isolated, the equations captured, and computer
programs written to accept the characteristics (span, avg/peak weight, wind,
etc.) and produce the design and parts lists.

Finally, only after a product architecture (C3 architecture) is quantified
can we begin to define processes for producing the product, refining the
product, and define and develop tools for automating the process.


Chuck Plinta

  [ as an aside, a picture is worth a thousand words, and I have a few that ]
  [ would make my arguments above clearer and more understandable.          ]
  [ So, when will we be able to post text and graphics?                     ]

kim@spock (Kim Letkeman) (08/10/90)

In article <8190@fy.sei.cmu.edu>, cpp@sei.cmu.edu (Charles Plinta) writes:
|
| [...]
|
| In article <4078@kim>, kim@spock (Kim Letkeman) writes:
|  >Hmmm ... I would suggest that during the industrial revolution the
|  >cottage industry gunsmiths were merely replaced by more advanced and
|  >efficient methods, not by a plethora of interchangeable parts. 
| 
| But the key here is that a product existed around which the process
| for producing them could be defined.  I wonder if the "baseline"
| product produced by the craftsmen was optimized to allow mass
| production and to allow it to easily be fixed in the field
| (interchangeable parts).  Were these the major design considerations
| for optimizing the baseline product design?  I also wonder which
| came first, product design or process definition.

It's probable that the original processes employed by gunsmiths were
optimized towards manufacturability when you consider the difficulty
of crafting anything with fire, low carbon steel, and a hammer. It's
likely that product design came first as even today it is rare for
processes to be fully defined and documented.

By the way ... the "process versus product" discussion is (IMO) as
flawed as these manufacturing analogies. We are really discussing a
naked compiler versus a full blown environment, as was Brad when he
first coined the process and product terminology (or at least when I
first saw him use it.) The connection to processes and products is
pretty weak (again, IMO.)

| In a larger sense, a suspension bridge is a scalable model also and
| these have been quantified to the point that the driving
| characteristics of a suspension bridge have been isolated, the
| equations captured, and computer programs written to accept the
| characteristics (span, avg/peak weight, wind, etc.) and produce the
| design and parts lists.

I don't think it's possible to compare the complexity of a large
software design project with that of a large bridge design project.

The above paragraph talks about a suspension bridge as a scalable
model, implying that this is one of those "product" units that could
be slotted into place to fit a certain requirement. In fact, I'm sure
that there is only so much flexibility for the architect in adapting
the common implementations to a new situation. (Lessons learned from
bridges like Galloping Gertie.)

This is in direct contrast with the C3 example (removed for brevity)
where innumerable variations have been implemented to date, few of
which could be certified as high quality with any reliability (again
in contrast to a bridge.) How do we pick the one that is to become the
model? Unfortunately, even if we could, we'd end up with only a small
piece of the puzzle in a typical large system (another contrast.)

| Finally, only after a product architecture (C3 architecture) is
| quantified can we begin to define processes for producing the
| product, refining the product, and define and develop tools for
| automating the process.

I am a bit of a fan of the "peopleware" philosophies put forth by
DeMarco and Lister. Good people develop good systems. Bad people fail.
People in between get widely varying results in widely varying time
frames. (Extreme paraphrasing.) 

There are so many variables in software design (people, tools,
algorithms, languages, target applications, target hardware, egos,
teams, etc) that simple analogies break down quickly.

I pretty much feel that OOP will help good people go even faster and
achieve even higher quality (I'd bet on getting an order of magnitude
in cases where the people are really good and OOP really fits.) Bad
people will continue failing.  And people in between will achieve
varying results. But the overall trend will be upward and that's why
OOP is such a big deal.
-- 
Kim Letkeman    mitel!spock!kim@uunet.uu.net

chip@tct.uucp (Chip Salzenberg) (08/10/90)

According to cpp@sei.cmu.edu (Charles Plinta):
> Only when their products are quantified will the steps of defining the
> process, monitoring the process, and refining the process make sense.
> Processes are based on producing products; to me it doesn't make sense 
> to define a process until I know what my product will look like.

In what useful way can a software unit be quantitatively defined?
Obviously, mere execution speed and memory usage aren't the point,
since many "efficient" programs are maintenance nightmares.

Physical objects are amenable to meaningful quantitative measurement.
However, the larger issues of "elegance" and "maintainability"
*cannot* be quantified, even for hardware, except in rough estimates.

Remember that, for all intents and purposes, *EACH* piece of software
is a one-time prototype.  It's just that we make lots and lots of
copies of them when we're done.  Therefore, trying to make analogies
with statistical values like MTBF is a futile exercise.

>To advance, we need to reduce complexity.

No argument here.

>To advance, we need to strive for standard products.

On its face, this goal seems unarguable.  But I would disagree.

Look at "standard" operating systems.  By the act of defining the
interface to an OS, you imply many things about its feature set, if
only by omission.  Yet competition will always drive vendors to add
features to their operating systems.  How can you have a "standard" OS
interface when each OS has unique features?

I think that the "standard" OS goal is a mirage, and will never be
accomplished.  I expect that the even more ambitious goal of
industry-wide "standard products" will suffer the same fate.
-- 
Chip Salzenberg at ComDev/TCT     <chip@tct.uucp>, <uunet!ateng!tct!chip>

bertrand@eiffel.UUCP (Bertrand Meyer) (08/12/90)

``Project versus product culture'': unless I am mistaken, I
originated this terminology in a short presentation I made at
a panel at a conference in New Orleans in 1989. The notion was
refined and discussed in detail in my article ``The New Culture
of Software Development: Reflections on the Practice of
Object-Oriented Design'', published in the proceedings of
TOOLS '89 (Technology of Object-Oriented Languages and Systems),
Paris, November 1989, pages 13-23.

In more recent presentations I have tended to use the term
``component'' rather than ``product'' for the second term
of the opposition, but the ideas remain the same.

-- Bertrand Meyer
bertrand@eiffel.com

lgm@cbnewsc.att.com (lawrence.g.mayka) (08/14/90)

In article <26C2AFDA.178E@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>Look at "standard" operating systems.  By the act of defining the
>interface to an OS, you imply many things about its feature set, if
>only by omission.  Yet competition will always drive vendors to add
>features to their operating systems.  How can you have a "standard" OS
>interface when each OS has unique features?

Two comments about standards:

1) Standards are, or ought to be, merely least common denominators.  A
standard that prohibits evolutionary improvement will (hopefully!) not
last long. 

2) Standards are not meant to accommodate revolutionary change.  For
example, to expect the standard for ordinary analog phones to "evolve"
into a standard for ISDN digital phones is unrealistic and in fact
undesirable.  The corollary is that one should not be surprised that a
revolutionary improvement in technology is "nonstandard" in comparison
with the old standby.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

chip@tct.uucp (Chip Salzenberg) (08/17/90)

According to lgm@cbnewsc.att.com (lawrence.g.mayka):
>1) Standards are, or ought to be, merely least common denominators.
>2) Standards are not meant to accommodate revolutionary change.

For both of these reasons, I believe that "standard components" as
described by Dr. Cox will never happen.  I'd be happy to be wrong, of
course.  But I don't think that the industry can make necessary
progress if it is based on least common denominators and if it cannot
accommodate revolutionary change.
-- 
Chip Salzenberg at Teltronics/TCT     <chip@tct.uucp>, <uunet!pdn!tct!chip>
 "Most of my code is written by myself.  That is why so little gets done."
                 -- Herman "HLLs will never fly" Rubin

lgm@cbnewsc.att.com (lawrence.g.mayka) (08/21/90)

In article <26CBEB20.5009@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>According to lgm@cbnewsc.att.com (lawrence.g.mayka):
>>1) Standards are, or ought to be, merely least common denominators.
>>2) Standards are not meant to accommodate revolutionary change.
>
>For both of these reasons, I believe that "standard components" as
>described by Dr. Cox will never happen.  I'd be happy to be wrong, of
>course.  But I don't think that the industry can make necessary
>progress if it is based on least common denominators and if it cannot
>accommodate revolutionary change.

Actually, I myself did not intend to be so pessimistic.  A standard
serves the useful purpose of minimizing *needless* incompatibilities
between products.  I was merely pointing out that standards, like pie
crusts, are "made to be broken." That is, one should expect a standard
to be eventually superseded by a more advanced and possibly
incompatible one.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.