[comp.object] Motivation for an OO Approach

eberard@grebyn.com (Ed Berard) (04/10/90)

Folks,

What follows is an article on the motivation for considering an object-
oriented approach to software engineering. I hope you find it useful.

				-- Ed Berard
				   Berard Software Engineering, Inc.
				   18620 Mateney Road
				   Germantown, Maryland 20874
				   Phone: (301) 353-9652
				   FAX: (301) 353-9272

---- cut here ---- cut here ---- cut here ---- cut here ---- cut here ----

  Motivation for an Object-Oriented Approach to Software Engineering

			 By Edward V. Berard
		  Berard Software Engineering, Inc.

			       PROLOGUE

Periodically, someone asks for examples of "successful (or
non-successful) uses of 'fill-in-the-blank' software engineering
technology." In truth, this is a difficult, if not impossible, request
to fulfill. Why? There are several reasons:

	-	Small examples, which are easily understood, can be
		(and often are) handily dismissed as "toy" (as opposed
		to "real") applications.

	-	It is difficult to justify the cost of a "large"
		(significant) test case (e.g., [Aron, 1969] and [Baker
		and Mills, 1973]). When "fill-in-the-blank" software
		engineering technology is used on a "real" project,
		accurate and detailed records are seldom kept. Thus,
		the results are often anecdotal. Even if accurate and
		detailed records are kept, it may be difficult to make
		any meaningful comparisons, since there may be few, if
		any, statistics for other "similar" projects which did
		not use "fill-in-the-blank" technology.

	-	The results of a large-scale use of
		"fill-in-the-blank" technology are seldom, if ever,
		all positive, or all negative. This allows different
		interpretations for the same information. [One of the
		major problems is that "success" (i.e., what must be
		specifically shown to declare the technology viable)
		is seldom defined before the project begins.] The
		all-too-regrettable, and all-too-frequent,
		language/technology jihads (holy wars) often result
		from different interpretations of the same
		information.

	-	The example is for a particular application domain,
		e.g., real-time embedded systems. Those with differing
		domains (e.g., MIS) can assert that the example is
		irrelevant for their domains.

	-	In the case of a technology which may be implemented
		using a number of different programming languages, the
		number of problems increases dramatically, e.g.:

		-	Some will observe that the example uses a
			programming language which they do not,
			cannot, or will not use, thus making the
			example worthless -- as far as they are
			concerned.

		-	Others will state that "fill-in-the-blank"
			software engineering technology "cannot
			'truly' be implemented in the programming
			language used in the example." Thus, the
			example is, for these people, a non-example.

		-	Still others will claim that the example
			merely demonstrates the power (or lack of
			power) of a particular programming language,
			and, therefore, the example cannot be used to
			justify the use (or non-use) of the general
			technology.

	-	The "metrics" used in the example may be irrelevant,
		incorrect, and/or incomplete. Even if the metrics are
		appropriate, they may not have been gathered properly.
		Further, the analysis and interpretation of the
		metrics may be faulty.

	-	The software engineers conducting a project may not be
		properly trained in "fill-in-the-blank" technology.
		This will make it difficult to assert that the
		technology was actually, or properly, used.
		Conversely, if well-trained, highly-skilled personnel
		are used, some will claim that the results are more
		attributable to choice of personnel, than to choice of
		technology.

[This last point is particularly interesting. It is well-known that
quite a large number of factors can influence the outcome of a
project. (See, e.g., [Boehm, 1981].) It is therefore not advisable for
a project to pin its hopes for success solely on the use of a
particular technology. (See [Brooks, 1987].)]

There is a question which should be asked before a test project is
begun, i.e., "why are we looking at this technology in the first
place?" It is in attempting to answer this question, that we often
uncover either the motivation to attempt the technology, or the
rationale for avoiding it.

		       TWO VIEWS OF MOTIVATION

The motivation for object-oriented technology can be found in the
answers to two questions:

	-	What is the motivation for object-oriented approaches
		in general?

	-	What is the motivation for an overall object-oriented
		approach to software engineering?

The first question focuses on the intrinsic value of object-oriented
software engineering, while the second question deals with maximizing
the benefits (and minimizing the problems) of such an approach.

Major motivations for object-oriented approaches in general are (in no
particular order):

	-	Object-oriented approaches encourage the use of
		"modern" software engineering technology.

	-	Object-oriented approaches promote and facilitate
		software reusability.

	-	Object-oriented approaches facilitate
		interoperability.

	-	When done well, object-oriented approaches produce
		solutions which closely resemble the original problem.

	-	When done well, object-oriented approaches result in
		software which is easily modified, extended, and
		maintained.

	-	There are a number of encouraging results reported
		(e.g. [Boehm-Davis and Ross, 1984]) from comparisons
		of object-oriented technology with more-commonly-used
		technologies.

The benefits of object-oriented technology are enhanced if it is
addressed early-on, and throughout the software engineering process.
Those considering object-oriented technology must assess its impact on
the entire software engineering process. Merely employing
object-oriented programming (OOP) will not yield the best results.
Software engineers, and their managers, must consider such items as
object-oriented requirements analysis (OORA), object-oriented design
(OOD), object-oriented domain analysis (OODA), object-oriented
database systems (OODBSs), and object-oriented computer aided software
engineering (OO CASE).

We differentiate between an "overall" (or consistent) approach and a
"mixed" approach. In an overall approach, a given technology is
assumed to impact everything, and the tools and processes are adjusted
accordingly. In a mixed approach, one approach (e.g., functional
decomposition) may be used for one process (e.g. requirements
analysis), and a different approach (e.g., object-oriented) may be
used for a different process (e.g., design). An overall
object-oriented approach appears to yield better results than when
object-oriented approaches are mixed with other approaches, e.g.,
functional decomposition.

Major motivations for an overall object-oriented approach to software
engineering are (in no particular order):

	-	Traceability improves if an overall object-oriented
		approach is used.

	-	There is a significant reduction in integration
		problems.

	-	The conceptual integrity of both the process and the
		product improves.

	-	The need for objectification and deobjectification is
		kept to a minimum.

	     ENCOURAGEMENT OF MODERN SOFTWARE ENGINEERING

"Modern software engineering" encompasses a multitude of concepts. We
will focus on four, i.e.:

	-	information hiding,

	-	data abstraction,

	-	encapsulation above the subprogram level, and

	-	concurrency.

Our claim will be that an object-oriented approach either forces a
software engineer to address these concepts, or makes the introduction
of the concepts, where appropriate, much easier.

Information hiding (e.g., [Parnas, 1972] and [Ross et al, 1975])
stresses that certain (inessential or unnecessary) details of an item
be made inaccessible. By providing only essential information, we
accomplish two goals:

	-	interactions among items are kept as simple as
		possible, thus reducing the chances of incorrect, or
		unintended, interactions, and

	-	we decrease the chances of unintended system
		corruption (e.g., "ripple effects") which may result
		from the introduction of changes to the hidden
		details. (See, for example, the discussion of
		"nearly-decomposable systems" in [Simon, 1981].)

Objects are "black boxes." Specifically, the details of the underlying
implementation of an object are hidden to the users (consumers) of an
object, and all interactions take place through a well-defined
interface. Consider a bank account object. Bank customers may know
that they can open an account, make deposits and withdrawals, and
inquire as to the present balance of the account. Further, they should
also know that they may accomplish these activities via either a "live
teller" or an automatic teller machine. However, bank customers are
not likely to be privy to the details of how each of these operations
are accomplished.
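
To make the interface/implementation split concrete, here is a
minimal sketch (in Python; the class and operation names are
illustrative assumptions, not taken from the article):

    class BankAccount:
        """A bank account as a "black box": its representation is
        hidden, and all interaction is through a small interface."""

        def __init__(self):
            # The underlying representation is an implementation
            # detail; it could become a ledger of transactions
            # without affecting any user of the object.
            self._balance = 0.0

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def withdraw(self, amount):
            if amount <= 0 or amount > self._balance:
                raise ValueError("invalid withdrawal")
            self._balance -= amount

        def balance(self):
            return self._balance

Whether a request arrives via a "live teller" or an automatic teller
machine, the same interface applies; neither channel needs the hidden
details.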

Information hiding is key to object-oriented thinking. Objects tend to
hide a great deal more information than either functions or
collections of data. More importantly, well-designed objects embody
more complete concepts than either functions or data alone. In effect,
you could say that the "black boxes" which result from object-oriented
approaches to software engineering are "blacker" (i.e., they usually
hide more information) than either functions or data.

Abstraction is the process of suppressing, or ignoring, inessential
details while focusing on the important, or essential, details. We
often speak of "levels of abstraction." As we move to "higher" levels
of abstraction, we shift our attention to the larger, and "more
important," aspects of an item, e.g., "the very essence of the item,"
or "the definitive characteristics of the item." As we move to "lower"
levels of abstraction we begin to pay attention to the smaller, and
"less important," details, e.g., how the item is constructed.

For example, consider an automobile. At a high level of abstraction,
the automobile is a monolithic entity, designed to transport people
and other objects from one location to another. At a lower level of
abstraction we see that the automobile is composed of an engine, a
transmission, an electrical system, and other items. At this level we
also see how these items are interconnected. At a still lower level of
abstraction, we find that the engine is made up of spark plugs,
pistons, a cam shaft, and other items.

Software engineering deals with many different types of abstraction.
Three of the most important are: functional abstraction, data
abstraction, and process abstraction. In functional abstraction, the
function performed becomes a high-level concept. While we may know a
great deal about the interface for the function, we know relatively
little about how it is accomplished. For example, given a function
which calculates the sine of an angle, we may know that the input is a
floating-point number representing the angle in radians, and that the
output will be a floating-point number between -1.0 and +1.0
inclusive. Still, we know very little about how the sine is actually
calculated, i.e., the function is a high-level concept -- an
abstraction.

Functional abstraction is considered good because it hides unnecessary
implementation details from those who use the function. If done well,
this makes the rest of the system less susceptible to changes in the
details of the algorithm.
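
As a sketch of this point (Python; the truncated Taylor series is one
assumed implementation among many):

    import math

    def sine(angle_radians):
        """Interface: radians in, a value in [-1.0, +1.0] out.
        The algorithm below (a truncated Taylor series) is a hidden
        detail; it could be replaced by a table lookup or math.sin
        without affecting any caller."""
        x = math.fmod(angle_radians, 2.0 * math.pi)
        term = total = x
        for n in range(1, 12):
            term *= -x * x / ((2 * n) * (2 * n + 1))
            total += term
        return total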

Data abstraction is built "on top of" functional abstraction. (See,
e.g., [Alexandridis, 1986].) Specifically, in data abstraction, the
details of the underlying implementations of both the functions and
the data are hidden from the user. A software engineer may choose to
represent some information in the form of an array or a record, for
example. If he or she makes this representation directly visible to
those who need the information, then the rest of the system is overly
sensitive to changes in the form of the information. However, the
software engineer may choose to hide the underlying implementation of
the information, and permit access to the information only through a
series of operations. In effect, the information (data) is now an
abstraction, and is only accessible through a well-defined interface
comprised of operations (and potentially other items). These
operations themselves exhibit functional abstraction.

While many definitions of data abstraction often stop at this point
(e.g., [Liskov, 1988]), there is more to the story. Suppose, for
example, we were to implement a list using data abstraction. We might
encapsulate the underlying representation for the list and provide
access via a series of operations, e.g., add, delete, length, and
copy. This offers the benefit of making the rest of the system
relatively insensitive to changes in the underlying implementation of
the list.

Assume, however, we were also interested in having several different
lists, each containing a different class of item, e.g., a list of
names, a list of phone numbers, and a list of addresses. We may even
be interested in a list which contains a mixture of items of different
classes. In these cases, we are interested in separating "the concept
of a list" from "the structure of the items contained in the list." In
effect, we have two different forms of data abstraction:

	-	the abstraction of the underlying structure of the
		list, and

	-	the abstraction of the items contained in the list.

The first form shields the rest of the system from both changes in the
underlying structure of the data, and changes in the algorithms which
manipulate that data. The second form provides for the separation of
the concept of a list from the implementation(s) of its contents.
There is a powerful consequence of the second form: reusability.

If the underlying data structure, and the encapsulated algorithms, are
sensitive to the structure of the items contained in the list, then,
each time we wish to store a different class of items in the list, we
must reimplement both the data structure and the algorithms. If,
however, both the underlying data structure and the encapsulated
algorithms treat the items stored in the list as an abstraction, we
may reuse the "list abstraction" for a (potentially vast) number of
lists.
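
A brief sketch of the second form (Python; the operation names follow
the example above, everything else is assumed): the structure and the
algorithms never inspect the items they store, so one "list
abstraction" is reused for names, phone numbers, or a mixture:

    class List:
        """A list whose algorithms treat stored items as opaque."""

        def __init__(self):
            self._items = []              # hidden representation

        def add(self, item):              # item's class is irrelevant
            self._items.append(item)

        def delete(self, position):
            del self._items[position]

        def length(self):
            return len(self._items)

        def copy(self):
            duplicate = List()
            duplicate._items = list(self._items)
            return duplicate

    names = List()
    names.add("Ada Lovelace")             # a list of names
    mixed = List()
    mixed.add("555-1212"); mixed.add(42)  # items of different classes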

At the code level, objects are implemented, in part, using data
abstraction techniques. (You often hear people speak of the use of
"abstract data types" (ADTs) in object-oriented programming.) Most
so-called object-oriented programming languages (OOPLs) allow their
users to easily encapsulate both the underlying implementation of
state information, and the algorithms which manipulate that
information. The separation of an abstraction from the underlying
implementation of its state information is a more difficult issue.
(Dynamic (or late) binding (see, e.g., [Cox, 1986] and [Meyer, 1988])
is, at best, a partial answer.)

Process abstraction deals with how an object handles (or does not
handle) itself in a parallel processing environment. In sequential
processing there is only one "thread of control," i.e., one point of
execution. In parallel processing there are at least two threads of
control, i.e., two, or more, simultaneous points of execution. Imagine
a windowing application. Suppose two, or more, concurrent processes
attempted to simultaneously write to a specific window. If the window
itself had a mechanism for correctly handling this situation, and the
underlying details of this mechanism were hidden, then we could say
that the window object exhibits process abstraction. Specifically, how
the window deals with concurrent processes is a high-level concept --
an
abstraction.
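
A minimal sketch of such a window (Python threads; the locking
strategy shown is one assumed mechanism among several possible):

    import threading

    class Window:
        """A window responsible for its own protection in a
        concurrent environment; how it maintains order is hidden."""

        def __init__(self):
            self._lock = threading.Lock()
            self._lines = []

        def write(self, text):
            # Simultaneous writers are serialized here, invisibly
            # to the callers.
            with self._lock:
                self._lines.append(text)

    window = Window()
    threads = [threading.Thread(target=window.write,
                                args=("line %d" % i,))
               for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()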

[One of the differences between an object-oriented system and more
conventional systems is in how they each handle concurrency. Many
conventional systems deal with concurrency by having a "master
routine" maintain order (e.g., schedule processing, prevent deadlock,
and prevent starvation). In an object-oriented concurrent system, much
of the responsibility for maintaining order is shifted to the objects
themselves, i.e., each object is responsible for its own protection in
a concurrent environment.]

Localization is the process of gathering and placing things in close
physical proximity to each other. Encapsulation is the process of
logically and/or physically packaging items so that they may be
treated as a unit. Functional decomposition approaches localize
information around functions, data-driven approaches localize
information around data, and object-oriented approaches localize
information around objects. Since encapsulation in a given system
usually reflects the localization process used, the encapsulated units
that result from a functional decomposition approach will be
functions, whereas the encapsulated units resulting from an
object-oriented approach will be objects.

In the late 1940s and early 1950s, all that was available were lines
of code. So if a system was composed of 10,000 lines of code, the
programmer had to deal with 10,000 separate pieces of information.
With the introduction of subroutines, programmers could encapsulate
two, or more, lines of code, and treat the resulting subroutines as
single units. For example, 10,000 lines of code could be grouped into
100 subroutines, with each subroutine comprising 100 lines of code.
This meant that programmers had a tool (i.e., subroutines) which
helped them manage the complexity of large systems.

Object-oriented programming introduced the concept of classes
([Dahl and Nygaard, 1966], and later [Goldberg and Kay, 1976]),
providing programmers with a much more powerful encapsulation
mechanism than subroutines. In object-oriented approaches, a class may
be viewed as a template, a pattern, or even a "blueprint" for the
creation of objects (instances). Classes allow programmers to
encapsulate many subroutines, and other items, into still larger
program units.

Consider a list class. Realizing that a list is more than just a
series of storage locations, a software engineer might design a list
class so that it encapsulated the following (a sketch appears after
this list):

	-	the items actually contained in the list,

	-	other useful state information, e.g., the current
		number of items stored in the list,

	-	the operations for manipulating the list, e.g., add,
		delete, length, and copy,

	-	any list related exceptions, e.g., overflow and
		underflow, (exceptions are mechanisms whereby an
		object can actively communicate "exceptional
		conditions" to its environment), and

	-	any useful exportable (from the class) constants,
		e.g., "empty list" and the maximum allowable number of
		items the list can contain.
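
A minimal sketch of such a class (Python; the capacity, the names, and
the exception protocol are all assumptions for illustration):

    class ListOverflow(Exception):
        """Communicates an "exceptional condition": the list is full."""

    class ListUnderflow(Exception):
        """Communicates an attempt to delete from an empty list."""

    class BoundedList:
        MAX_ITEMS = 100                   # exportable constant

        def __init__(self):
            self._items = []              # the items actually contained

        def add(self, item):
            if len(self._items) >= self.MAX_ITEMS:
                raise ListOverflow()
            self._items.append(item)

        def delete(self):
            if not self._items:
                raise ListUnderflow()
            return self._items.pop()

        def length(self):                 # useful state information
            return len(self._items)

        def is_empty(self):               # the "empty list" test
            return not self._items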

[Even in so-called "classless" object-oriented languages (e.g., Self
[Ungar and Smith, 1987]), the concept of encapsulation is still
crucial -- and reflects much the same level of packaging as that of
class-based object-oriented languages.]

In summary, we could say that objects allow us to deal with entities
which are significantly larger than subroutines -- and that this, in
turn, allows us to better manage the complexity of large systems.

Many modern software systems involve at least some level of
concurrency. Examples of concurrent systems include:

	-	an interactive MIS (management information system)
		which allows multiple, simultaneous users,

	-	an HVAC (heating, ventilation, and air conditioning)
		system which controls the environment in a building,
		in part, by simultaneously monitoring a series of
		thermostats which have been placed throughout the
		building, and

	-	an air traffic control (ATC) system which must deal
		with hundreds (possibly thousands) of airplanes
		simultaneously.

In these "real life" examples, it is fairly easy to identify
concurrent objects, e.g., the users in the MIS system, the thermostats
in the HVAC system, and the airplanes in the ATC system. It can be
argued that it is easier to understand concurrency in terms of
objects than in terms of functions. For example, in the
Playground system ([Fenton and Beck, 1989]), children as young as 9
and 10 years of age have constructed fairly sophisticated concurrent
systems.

[We note that much of the literature on concurrent object-oriented
systems (e.g., [ACM, 1989], [Agha, 1986], and [Yonezawa and Tokoro,
1987]) can seem fairly imposing. However, it can be demonstrated that
an object-oriented approach to concurrent systems can greatly simplify
even intensely concurrent situations, i.e., situations where
concurrency is nested within concurrency. (See, for example, the
discussions of "agents" in [Adams and Nabi, 1989] and [Fenton and
Beck, 1989].)]

	THE PROMOTION AND FACILITATION OF SOFTWARE REUSABILITY

When Doug McIlroy delivered his landmark article on software
reusability ([McIlroy, 1969]) over twenty years ago, software reuse
was not a topic which generated a great deal of interest. Since the
mid 1980s, however, software reusability has become an increasingly
"hot" topic. (See, e.g., [Biggerstaff and Perlis, 1989a], [Biggerstaff
and Perlis, 1989b], and [Freeman, 1987]. [Tracz, 1988] contains a
reuse bibliography with over 600 references.)

Software reusability is not a topic which is well-understood by the
masses. For example, many software reusability discussions incorrectly
limit the definition of software to source code and object code. Even
within the object-oriented programming community, people seem to focus
on the inheritance mechanisms of various programming languages as a
mechanism for reuse. (In many OOPLs, one of the steps in creating a
subclass is setting up a "backwards pointer" to one, or more,
superclasses. This allows the subclass to "reuse" the code from the
superclass(es).) Although reuse via inheritance is not to be
dismissed, there are more powerful reuse mechanisms.

[One of the items which sparked the Industrial Revolution over two
centuries ago was interchangeable parts. (See, e.g., [Brinton et al,
1964].) It is worth noting that these interchangeable parts were
objects as opposed to mere functionality.]

Research into software reusability, and actual practice, have
established a definite connection between overall software engineering
approaches and software reusability. For example, analysis and design
techniques have a very large impact on the reusability of software --
a greater impact, in fact, than programming (coding) techniques. A
literature search for software engineering approaches which appear to
have a high correlation with software reusability shows a definite
relationship between object-oriented approaches and software reuse,
e.g., [Brown and Quanrud, 1988], [Carstensen, 1987], [Ledbetter and
Cox, 1985], [Meyer, 1987], [St. Dennis et al, 1986], [Safford, 1987],
[Schmucker, 1986], and [Tracz, 1987].

	  THE PROMOTION AND FACILITATION OF INTEROPERABILITY

Consider a computer network with different computer hardware and
software at each node. Next, instead of viewing each node as a
monolithic entity, consider each node to be a collection of (hardware
and software) resources. Interoperability is the degree to which an
application running on one node in the network can make use of a
(hardware or software) resource at a different node on the same
network.

For example, consider a network with a Cray supercomputer, at one
node, rapidly processing a simulation application, and needing to
display the results on a high-resolution color monitor. If the
simulation software on the Cray makes use of a color monitor on a
Macintosh IIfx at a different node on the same network, that is an
example of interoperability. Another example would be if the Macintosh
IIfx made use of a relational DBMS which was resident on a DEC VAX
elsewhere on the network.

In effect, as the degree of interoperability goes up, the concept of
the network vanishes. A user on any one node has increasingly
transparent use of any resource on the network.

There are articles which attempt to document the relationship between
an object-oriented approach and interoperability, e.g., [Anderson,
1987]. There are also systems which were constructed (to some degree
at least) in an object-oriented manner, and seem to show a connection
between object-orientation and interoperability, e.g., the X Windows
System and Sun's NeWS. Still, it seems that there should be a more
direct way to connect object-oriented approaches and interoperability.

Polymorphism is a measure of the degree of difference in how each item
in a specified collection of items must be treated at a given level of
abstraction. Polymorphism is increased when any unnecessary
differences, at any level of abstraction, within a collection of items
are eliminated. Although polymorphism is often discussed in terms of
programming languages (e.g., [Harland, 1984] and [Strachey, 1967]), it
is a concept with which we are all familiar in everyday life.

For example, we use the verb "drive" in a polymorphic manner when we
talk about "driving a car," "driving a truck," or "driving a bus." The
concept of polymorphism is further extended when we realize that the
"driving interface" to each of these vehicles includes a steering
wheel, an accelerator pedal, a brake pedal, a speedometer, and a fuel
gauge.

Suppose we are constructing a software system which involves a
graphical user interface (GUI). Further, suppose we are using an
object-oriented approach. Three of the objects we have identified are
a file, an icon, and a window. We need an operation which will cause
each of these items to come into existence. We could provide the same
operation with a different name (e.g., "open" for the file, "build"
for the icon, and "create" for the window) for each item. Hopefully,
we will recognize that we are seeking the same general behavior for
several different objects and will assign the same name (e.g.,
"create") to each operation.

It should not go unnoticed that a polymorphic approach, when done
well, can significantly reduce the overall complexity of a system.
This is especially important in a distributed application environment.
Hence, there appears to be a very direct connection between
polymorphism and enhanced interoperability.

We can make two additional observations:

	-	Since object-oriented approaches often stress
		polymorphism, it should come as no surprise that these
		approaches also facilitate interoperability.

	-	It appears that localizing information around objects
		(as opposed to functions) encourages software
		engineers to describe the same general behavior using
		the same names.

   OBJECT-ORIENTED SOLUTIONS CLOSELY RESEMBLE THE ORIGINAL PROBLEM

One of the axioms of systems engineering is that it is a good idea to
make the solution closely resemble the original problem. One of the
ideas behind this is that, if we understand the original problem, we
will also be better able to understand our solution. For example, if
we are having difficulties with our solution, it will be easy to check
it against the original problem.

There is a great deal of evidence to suggest that it is easier for
many people to view the "real world" in terms of objects, as opposed
to functions, e.g.:

	-	many forms of knowledge representation, e.g. semantic
		networks ([Barr and Feigenbaum, 1981], page 180),
		discuss knowledge in terms of "objects,"

	-	the relative "user friendliness" of graphical user
		interfaces, and

	-	common wisdom, e.g., "a picture is worth a thousand
		words."

Unfortunately, many who have been in the software profession for more
than a few years tend to view the world almost exclusively in terms of
functions. These people often suffer from "object blindness," i.e.,
the inability to identify objects, or to view the world in terms of
interacting objects. We should point out that "function" is not a
dirty word in object-oriented software engineering. For example, it is
quite acceptable to speak of the functionality provided by an object,
or the functionality resulting from interactions among objects.

    OBJECT-ORIENTED APPROACHES RESULT IN SOFTWARE WHICH IS EASILY
		  MODIFIED, EXTENDED AND MAINTAINED

When conventional engineers (e.g., electronics engineers, mechanical
engineers, and automotive engineers) design systems they follow some
basic guidelines:

	-	They may start with the intention of designing an
		object (e.g., an embedded computer system, a bridge,
		or an automobile), or with the intention of
		accomplishing some function (e.g., guiding a missile,
		crossing a river, or transporting people from one
		location to another). Even if they begin with the idea
		of accomplishing a function, they quickly begin to
		quantify their intentions by specifying objects
		(potentially at a high level of abstraction) which
		will enable them to provide the desired functionality.
		In short order, they find themselves doing
		object-oriented decomposition, i.e., breaking the
		potential product into objects (e.g., power supplies,
		RAM, engines, transmissions, girders, and cables).

	-	They assign functionality to each of the parts
		(object-oriented components). For example, the
		function of the engine is to provide a power source
		for the movement of the automobile. Looking ahead (and
		around) to reusing the parts, the engineers may modify
		and extend the functionality of one, or more, of the
		parts.

	-	Realizing that each of the parts (objects) in their
		final product must interface with one, or more, other
		parts, they take care to create well-defined
		interfaces. Again, focusing on reusability, the
		interfaces may be modified or extended to deal with a
		wider range of applications.

	-	Once the functionality and well-defined interfaces are
		set in place, each of the parts may be either
		purchased off-the-shelf, or designed independently. In
		the case of complex, independently-designed parts, the
		engineers may repeat the above process.

Without explicitly mentioning it, we have described the information
hiding which is a normal part of conventional engineering. By
describing the functionality (of each part) as an abstraction, and by
providing well-defined interfaces, we foster information hiding.

However, there is also often a more powerful concept at work here.
Each component not only encapsulates functionality, but also knowledge
of state (even if that state is constant). This state, or the effects
of this state, are accessible via the interface of the component. For
example, a RAM chip stores and returns bits of information (through
its pins) on command.

By carefully examining the functionality of each part, and by ensuring
well-thought-out and well-defined interfaces, the engineers greatly
enhance the reusability of each part. However, they also make it
easier to modify and extend their original designs. New components can
be swapped in for old components -- provided they adhere to the
previously-defined interfaces and that the functionality of the new
component is harmonious with the rest of the system. Electronics
engineering, for example, often uses phrases such as "plug
compatibility" and "pin compatibility" to describe this phenomenon.

Conventional engineers also employ the concept of specialization.
Specialization is the process of taking a concept and modifying
(enhancing) it so that it applies to a more specific set of
circumstances, i.e., it is less general. Mechanical engineers may take
the concept of a bolt and fashion hundreds of different categories of
bolts by varying such things as the alloys used, the diameter, the
length, and the type of head. Electronics engineers create many
specialized random access memory (RAM) chips by varying such things as
the implementation technology (e.g., CMOS), the access time, the
organization of the memory, and the packaging.

By maintaining a high degree of consistency in both the interfaces and
functionality of the components, engineers can allow for
specialization while still maintaining a high degree of modifiability.
By identifying both the original concepts, and allowable (and
worthwhile) forms of specialization, engineers can construct useful
"families of components." Further, systems can be designed to readily
accommodate different family members.
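
In software, specialization is commonly expressed through
inheritance. A small sketch (Python; the RAM-chip family and its
attributes are invented for illustration):

    class RAMChip:
        """The general concept of a random access memory chip."""
        access_time_ns = 100

        def describe(self):
            # One consistent interface for every family member.
            return "%s: %d ns" % (type(self).__name__,
                                  self.access_time_ns)

    class CMOSRAMChip(RAMChip):
        """A specialization: same interface, a specific technology."""
        access_time_ns = 70

    class FastCMOSRAMChip(CMOSRAMChip):
        """A further specialization within the family."""
        access_time_ns = 25

    # A system written against RAMChip accommodates any member.
    for chip in (RAMChip(), CMOSRAMChip(), FastCMOSRAMChip()):
        print(chip.describe())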

Object-oriented software engineering stresses such points as:

	-	encapsulation (packaging) of functionality with
		knowledge of state information,

	-	well-defined functionality and interfaces for all
		objects within a system,

	-	information hiding -- particularly hiding the details
		of the underlying implementations of both the
		functionality and the state information, and

	-	specialization (using, for example, the concept of
		inheritance).

In a very real sense, object-oriented software engineering has a
great deal in common with more conventional forms of engineering. The
concepts of encapsulation, well-defined functionality and interfaces,
information hiding, and specialization are key to the modification and
extension of most non-software systems. It should come as no surprise
that, if used well, they can allow for software systems which are
easily modified and extended.

		      THE GENERAL ELECTRIC STUDY

There are quite a number of "paper" methodology comparisons (i.e.,
comparisons of methodologies made without attempting to actually use
the methodologies), e.g., [Kelly, 1987] and [Loy, 1990].
(Unfortunately, these papers often reveal more about their authors'
misconceptions and lack of expertise than they do about the intrinsic
quality of a given methodology.) Other papers analyze small test cases
done by others, e.g., [Boyd, 1987]. Yet others relate classroom
experiences, e.g., [Jamsa, 1984].

Since the late 1960s there have been a large number of "methodology
bake offs." (A "bake off" is a quaint American version of a cooking
contest. In a "methodology bake off" the same sample problem is given
to two, or more, design teams. Each design team then produces a
"solution" to the problem using a different methodology.) These
methodology contests have been conducted by companies, local chapters
of professional organizations, and software engineering research
organizations, among others.

Beginning around 1984, object-oriented design (OOD) began to figure
prominently in these contests. OOD was pitted against other approaches
in contests run by companies, local chapters of the ACM (Association
for Computing Machinery), and the Rocky Mountain Institute of Software
Engineering, to name a few. In general, the test cases were slightly
complicated small applications, e.g., the classic "elevator scheduler
and controller" and "cruise control" problems. OOD often had favorable
results in these contests.

In 1984, Debbie Boehm-Davis and Lyle Ross conducted a study
([Boehm-Davis and Ross, 1984]) for General Electric. This study
compared several different development approaches for Ada software,
i.e., classic Structured Analysis and Structured Design (e.g.,
[Page-Jones, 1980]), Object-Oriented Design (e.g., [Booch, 1983]), and
Jackson System Development (e.g., [Jackson, 1983]). The results of
this study indicate that, when compared to the other solutions, the
object-oriented solutions:

	-	were simpler (in terms of control flow and numbers of
		operators and operands),

	-	were smaller (using lines of code as a metric),

	-	appeared to be better suited to real-time
		applications, and

	-	took less time to develop.

Keep in mind that this study, while certainly indicating the potential
of an object-oriented approach, did not prove conclusively the
superiority of an object-oriented approach.

     THE IMPACT OF OBJECT-ORIENTATION ON THE SOFTWARE LIFE-CYCLE

To help us get some perspective on object-oriented software
engineering, it is useful to note the approximate times when various
object-oriented technologies were introduced, e.g.:

	-	object-oriented programming: 1966 (with Simula ([Dahl
		and Nygaard, 1966]), although the term did not come
		into existence until around 1970 (many people credit
		Alan Kay, then at Xerox PARC, with coining the term)),

	-	object-oriented design: 1980 (via Grady Booch (e.g.,
		[Booch, 1981])),

	-	object-oriented computer hardware: 1980 (see, e.g.,
		[Organick, 1983] and [Pountain, 1988]),

	-	object-oriented databases: 1985 (see, e.g., [IEEE,
		1985] and [Zdonik and Maier, 1990]; it is only
		recently that formal calls for some consistency
		(standardization) have appeared, e.g., [Atkinson et
		al, 1989]. As a side note: "semantic data modeling"
		(e.g., [Hammer and McLeod, 1981]) is often cited as
		the immediate precursor to OODBSs.),

	-	object-oriented requirements analysis: 1986 (this is
		when the first courses became available, although books
		did not appear until 1988 (e.g., [Shlaer and Mellor,
		1988])), and

	-	object-oriented domain analysis: 1988 (e.g., [Cashman,
		1989] and [Shlaer and Mellor, 1989]).

Originally, people thought of "object-orientation" only in terms of
programming languages. Discussions were chiefly limited to
object-oriented programming (OOP). However, during the 1980s, people
found that:

	-	object-oriented programming alone was insufficient for
		large and/or critical problems, and

	-	object-oriented thinking was largely incompatible with
		traditional (e.g., functional decomposition)
		approaches -- due chiefly to the differences in
		localization.

During the 1970s and early 1980s, many people believed that the
various life-cycle phases (e.g., analysis, design, and coding) were
largely independent. Therefore, one could supposedly use very
different approaches for each phase, with only minor consequences. For
example, one could consider using structured analysis with
object-oriented design. This line of thinking, however, was found to
be largely inaccurate.

Today, we know that, if we are considering an object-oriented approach
to software engineering, it is better to have an overall
object-oriented approach. There are several reasons for this.

			     TRACEABILITY

Traceability is the degree of ease with which a concept, idea, or
other item may be followed from one point in a process to either a
succeeding, or preceding, point in the same process. For example, one
may wish to trace a requirement through the software engineering
process to identify the delivered source code which specifically
addresses that requirement.

Suppose, as is often the case, that you are given a set of functional
requirements, and you desire (or are told) that the delivered source
code be object-oriented. During acceptance testing, your customer will
either accept or reject your product based on how closely you have
matched the original requirements. In an attempt to establish
conformance with requirements (and sometimes to ensure that no
"extraneous code" has been produced), your customer wishes to trace
each specific requirement to the specific delivered source code which
meets that requirement, and vice versa.

Unfortunately, the information contained in the requirements is
localized around functions, and the information in the delivered
source code is localized around objects. One functional requirement,
for example, may be satisfied by many different objects, or a single
object may satisfy several different requirements. Experience has
shown that tracing requirements, in situations such as this, is a
very difficult process.

There are two common solutions to this problem:

	-	transform the original set of functional requirements
		into object-oriented requirements, or

	-	request that the original requirements be furnished in
		object-oriented form.

Either of these solutions will result in requirements information
which is localized around objects. This will greatly facilitate the
tracing of requirements to object-oriented source code, and vice
versa.

		  REDUCTION OF INTEGRATION PROBLEMS

When Grady Booch first presented his first-generation version of
object-oriented design in the early 1980s, he emphasized that it was a
"partial life-cycle methodology," i.e., it focused primarily on
software design issues, secondarily on software coding issues, and
largely ignored the rest of the life-cycle, e.g., it did not address
early life-cycle phases, such as analysis.

One strategy which was commonly attempted was to break a large problem
into a number of large functional (i.e., localized on functionality)
pieces, and then to apply object-oriented design to each of the
pieces. The intention was to integrate these pieces at a later point
in the life-cycle, i.e., shortly before delivery. This process was not
very successful. In fact, it resulted in large problems which became
visible very late in the development part of the software life-cycle,
i.e., during "test and integration."

As you might have guessed, the problem was again based on differing
localization criteria. Suppose, for example, a large problem is
functionally decomposed into four large functional partitions. Each
partition is assigned to a different team, and each team attempts to
apply an object-oriented approach to the design of their functional
piece. All appears to be going well -- until it is time to integrate
the functional pieces. When the pieces attempt to communicate, the
teams find many cases where each group has implemented "the same
object" in a different manner.

What has happened? Let us assume, for example, that the first, third,
and fourth groups all have identified a common object. Let's call this
object X. Further, let us assume that each team identifies and
implements object X based solely on the information contained in their
respective functional partition. The first group identifies and
implements object X as having attributes A, B, and D. The third group
identifies and implements object X as having attributes C, D, and E.
The fourth group identifies and implements object X as having only
attribute A. Each group, therefore, has an incomplete picture of
object X.

This problem may be made worse by the fact that each team may have
allowed the incomplete definitions of one, or more, objects to
influence their designs of both their functional partition, and the
objects contained therein.

This problem could have been greatly reduced by surveying the
original, unpartitioned set of functional requirements, and
identifying both candidate objects and their characteristics. Further,
the original system should have been re-partitioned along
object-oriented lines, i.e., the software engineers should have used
object-oriented decomposition. This knowledge should then have been
carried forward into the design process as well.

		 IMPROVEMENT IN CONCEPTUAL INTEGRITY

In his book, The Mythical Man Month ([Brooks, 1975]), Fred Brooks, Jr.
stresses the value of conceptual integrity. Conceptual integrity means
being true to a concept, or, more simply, being consistent.
Consistency helps to reduce complexity, and, hence, increases
reliability. If a significant change in the localization strategy is
made during the life-cycle of a software product, conceptual integrity
is violated, and the potential for the introduction of errors is very
high.

During the development part of the life-cycle, we should strive for an
overall object-oriented approach. In this type of approach, each
methodology, tool, documentation technique, management practice, and
software engineering activity is either object-oriented or supportive
of an object-oriented approach. By using an overall object-oriented
approach (as opposed to a "mixed localization" approach), we should be
able to eliminate a significant source of errors.

[During the maintenance phase of the software life-cycle, we have a
different set of issues and problems. If we must continue to deal with
a significant amount of pre-existing, non-object-oriented software
(i.e., a "legacy" -- see, e.g., [Dietrich et al, 1989]), there are a
number of techniques we can employ. However, we will still face the
problem of attempting to minimize errors which are introduced as a
result of shifting localization strategies.]

     LESSENING THE NEED FOR OBJECTIFICATION AND DEOBJECTIFICATION

Objects are not data. Data are not objects. Objects are not merely
data and functions encapsulated in the same place. However, each
object-oriented application must interface with (at least some)
non-object-oriented systems, i.e., systems which do not recognize
objects. Two of the most common examples are:

	-	When objects must be persistent, e.g., when objects
		must persist beyond the invocation of the current
		application. Although an object-oriented data base
		management system (OODBMS) is called for, a
		satisfactory one may not be available. Conventional
		relational DBMSs, while they may recognize some state
		information, do not recognize objects. Therefore, if
		we desire to store an object in a non-OODBMS, we must
		transform the object into something which can be
		recognized by the non-OODBMS. When we wish to retrieve
		a stored object, we will reverse the process.

	-	In a distributed application, where objects must be
		transmitted from one node in the network to another
		node in the same network. Networking hardware and
		software is usually not object-oriented. Hence, the
		transmission process requires that we have some way of
		reducing an object to some primitive form
		(recognizable by the network), transmitting the
		primitive form, and reconstituting the object at the
		destination node.

Deobjectification is the process of reducing an object to a form which
can be dealt with by a non-object-oriented system. Objectification is
the process of (re)constituting an object from some more primitive
form of information. Each of these processes, while necessary, has a
significant potential for the introduction of errors. Our goal should
be to minimize the need for these processes. An overall
object-oriented approach can help to keep the need for objectification
and deobjectification to a minimum.
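
A minimal sketch of the two processes (Python; pickle stands in for
whatever primitive form a conventional DBMS or network will accept):

    import pickle

    class Account:
        def __init__(self, owner, balance):
            self.owner = owner
            self.balance = balance

    original = Account("E. Berard", 100.0)

    # Deobjectification: reduce the object to a primitive form the
    # non-object-oriented system can recognize (here, bytes).
    primitive_form = pickle.dumps(original)

    # Objectification: (re)constitute the object at the other end.
    restored = pickle.loads(primitive_form)
    assert (restored.owner, restored.balance) == ("E. Berard", 100.0)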

[In truth, this is not a problem which is unique to object-oriented
systems. Anytime we interface two, or more, separately developed
systems, we should anticipate some conversion activity.]

			      CONCLUSION

We have touched on some of the motivations for an object-oriented
approach in general. Further, we have stressed the benefits of an
overall object-oriented approach. The intention was not to prove
conclusively that an object-oriented approach is superior to any other
approach, but rather to suggest reasons for considering such an
approach.

			     BIBLIOGRAPHY

[ACM, 1989]. Association for Computing Machinery, Special Issue of
SIGPLAN Notices: Proceedings of the ACM SIGPLAN Workshop on
Object-Based Concurrent Programming, Vol. 24, No. 4, April 1989.

[Adams and Nabi, 1989]. S.S. Adams and A.K. Nabi, "Neural Agents -- A
Frame of Mind," OOPSLA '89 Conference Proceedings, Special Issue of
SIGPLAN Notices, Vol. 24, No. 10, October 1989, pp. 139 - 150.

[Agha, 1986]. G. Agha, Actors: A Model of Concurrent Computation in
Distributed Systems, MIT Press, Cambridge, Massachusetts, 1986.

[Alexandridis, 1986]. N.A. Alexandridis, "Adaptable Software and
Hardware: Problems and Solutions," Computer, Vol. 19, No. 2,
February 1986, pp. 29 - 39.

[Anderson, 1987]. J. Anderson, "Achieving Interoperability: Myth and
Reality," Infosystems, Vol. 34, No. 7, July 1987, page 56.

[Aron, 1969]. J.D. Aron, "The Superprogrammer Project," reprinted in
Classics in Software Engineering, E.N. Yourdon, Editor, Yourdon Press,
New York, New York, 1979, pp. 37 - 39.

[Atkinson et al, 1989]. M. Atkinson, F. Bancilhon, D. DeWitt, K.
Dittrich, D. Maier, and S. Zdonik, "The Object-Oriented Database
System Manifesto," (Invited Paper), Proceedings of the First
International Conference on Deductive and Object-Oriented Databases,
Kyoto, Japan, December 4-6, 1989, pp. 40 - 57.

[Baker and Mills, 1973]. F.T. Baker and H.D. Mills, "Chief Programmer
Teams," Datamation, Vol. 19, No. 12, December 1973, pp. 58 - 61.

[Barr and Feigenbaum, 1981]. A. Barr and E.A. Feigenbaum, Editors, The
Handbook of Artificial Intelligence, Volume 1, HeurisTech Press,
Stanford, California, 1981.

[Biggerstaff and Perlis, 1989a]. T.J. Biggerstaff and A.J. Perlis,
Editors, Software Reusability, Volume 1: Concepts and Models,
Addison-Wesley Publishing Company, New York, New York, 1989.

[Biggerstaff and Perlis, 1989b]. T.J. Biggerstaff and A.J. Perlis,
Editors, Software Reusability, Volume 2: Applications and Experience,
Addison-Wesley Publishing Company, New York, New York, 1989.

[Boehm, 1981]. B.W. Boehm, Software Engineering Economics,
Prentice-Hall, Englewood Cliffs, New Jersey, 1981.

[Boehm-Davis and Ross, 1984]. D. Boehm-Davis and L.S. Ross,
"Approaches to Structuring the Software Development Process," General
Electric Company Report Number GEC/DIS/TR-84-B1V-1, October 1984.

[Booch, 1981]. G. Booch, "Describing Software Design in Ada," SIGPLAN
Notices, Vol. 16, No. 9, September 1981, pp. 42 - 47.

[Booch, 1982]. G. Booch, "Object Oriented Design," Ada Letters, Vol.
I, No. 3, March-April 1982, pp. 64 - 76.

[Booch, 1983]. G. Booch, Software Engineering with Ada, The
Benjamin/Cummings Publishing Company, Menlo Park, California, 1983.

[Boyd, 1987]. S. Boyd, "Object-Oriented Design and PAMELA: A
Comparison of Two Design Methods for Ada," Ada Letters, Vol. 7, No. 4,
July-August 1987, pp. 68 - 78.

[Brinton et al, 1964]. C. Brinton, J.B. Christopher, and R.L. Wolff,
Civilization In the West, Prentice-Hall, Englewood Cliffs, New Jersey,
1964.

[Brooks, 1975]. F. P. Brooks, Jr., The Mythical Man-Month,
Addison-Wesley Publishing Company, Reading, Massachusetts, 1975.

[Brooks, 1987]. F. P. Brooks, Jr., "No Silver Bullet: Essence and
Accidents of Software Engineering," IEEE Computer, Vol. 20, No. 4,
April 1987, pp. 10 - 19.

[Brown and Quanrud, 1988]. G.R. Brown and R.B. Quanrud, "The Generic
Architecture Approach to Reusable Software," Proceedings of the Sixth
National Conference on Ada Technology, March 14-18, 1988, U.S. Army
Communications-Electronics Command, Fort Monmouth, New Jersey, pp. 390
- 394.

[Carstensen, 1987]. H.B. Carstensen, "A Real Example of Reusing Ada
Software," Proceedings of the Second National Conference on Software
Reusability and Maintainability, National Institute for Software
Quality and Productivity, Washington, D.C., March 1987, pp. B-1 to
B-19.

[Cashman, 1989]. M. Cashman, "Object-Oriented Domain Analysis,"
Software Engineering Notes, Vol. 14, No. 6, October 1989, page 67.

[Coad, 1988]. P. Coad, "Object-Oriented Requirements Analysis (OORA):
A Practitioner's Crib Sheet," Proceedings of Ada Expo 1988, Galaxy
Productions, Frederick, Maryland, 1988, 9 pages.

[Coad and Yourdon, 1989]. P. Coad and E. Yourdon, OOA --
Object-Oriented Analysis, Prentice-Hall, Englewood Cliffs, New Jersey,
1989.

[Cox, 1986]. B.J. Cox, Object Oriented Programming: An Evolutionary
Approach, Addison-Wesley, Reading, Massachusetts, 1986.

[Dahl and Nygaard, 1966]. O.J. Dahl and K. Nygaard, "SIMULA -- an
ALGOL-Based Simulation Language," Communications of the ACM, Vol. 9,
No. 9, September 1966, pp. 671 - 678.

[Dietrich et al, 1989]. W.C. Dietrich, Jr., L.R. Nackman, and F.
Gracer, "Saving a Legacy With Objects," OOPSLA '89 Conference
Proceedings, Special Issue of SIGPLAN Notices, Vol. 24, No. 10,
October 1989, pp. 77 - 84.

[Fenton and Beck, 1989]. J. Fenton and K. Beck, "Playground: An
Object-Oriented Simulation System With Agent Rules for Children of All
Ages," OOPSLA '89 Conference Proceedings, Special Issue of SIGPLAN
Notices, Vol. 24, No. 10, October 1989, pp. 123 - 138.

[Freeman, 1987]. P. Freeman, Editor, Tutorial: Software Reusability,
IEEE Catalog Number EH0256-8, IEEE Computer Society Press, Washington,
D.C., 1987.

[Goldberg and Kay, 1976]. A. Goldberg and A. Kay, Editors,
Smalltalk-72 Instructional Manual, Technical Report SSL-76-6, Xerox
PARC, Palo Alto, California, March 1976.

[Hammer and McLeod, 1981]. M. Hammer and D. McLeod, "Database
Description with SDM: A Semantic Database Model," ACM Transactions on
Database Systems, Vol. 6, No. 3, September 1981, pp. 351 - 386.

[Harland, 1984]. D.M. Harland, Polymorphic Programming Languages --
Design and Implementation, Halstead Press, New York, New York, 1984.

[IEEE, 1985]. IEEE, Special Issue on Object-Oriented Systems, IEEE
Database Engineering, Vol. 8, No. 4, December 1985.

[Jackson, 1983]. M. A. Jackson, System Development, Prentice-Hall,
Englewood Cliffs, New Jersey, 1983.

[Jamsa, 1984]. K.A. Jamsa, "Object Oriented Design vs. Structured
Design -- A Student's Perspective," Software Engineering Notes, Vol.
9, No. 1, January 1984, pp. 43 - 49.

[Kelly, 1987]. J.C. Kelly, "A Comparison of Four Design Methods for
Real-Time Systems," Proceedings of the 9th International Conference on
Software Engineering, March 30-April 2, 1987, pp. 238 - 252.

[Ledbetter and Cox, 1985]. L. Ledbetter and B. Cox, "Software ICs,"
Byte, Vol. 10, No. 6, June 1985, pp. 307 - 315.

[Liskov, 1988]. B. Liskov, "Data Abstraction and Hierarchy," OOPSLA
'87 Addendum to the Proceedings, Special Issue of SIGPLAN Notices,
Vol. 23, No. 5, May 1988, pp. 17 - 34.

[Loy, 1990]. P.H. Loy, "Comparisons of O-O and Structured
Development," Software Engineering Notes, Vol. 15, No. 1, January
1990, pp. 44 - 48.

[McIlroy, 1969]. M.D. McIlroy, "'Mass Produced' Software Components,"
in Software Engineering: A Report On a Conference Sponsored by the
NATO Science Committee, Garmisch, Germany, October 1968, P. Naur and
B. Randell, Editors, pp. 138 - 155.

[Meyer, 1987]. B. Meyer, "Reusability: The Case for Object-Oriented
Design," IEEE Software, Vol. 4, No. 2, March 1987, pp. 50 - 64.

[Meyer, 1988]. B. Meyer, Object-Oriented Software Construction,
Prentice-Hall, Englewood Cliffs, New Jersey, 1988.

[Organick, 1983]. E. Organick, A Programmer's View of the Intel 432
System, McGraw-Hill, New York, New York, 1983.

[Page-Jones, 1980]. M. Page-Jones, The Practical Guide to Structured
Systems Design, Yourdon Press, New York, New York, 1980.

[Parnas, 1972]. D.L. Parnas, "On the Criteria To Be Used in
Decomposing Systems Into Modules," Communications of the ACM, Vol. 15,
No. 12, December 1972, pp. 1053 - 1058.

[Pountain, 1988]. D. Pountain, "Rekursiv: An Object-Oriented CPU,"
Byte, Vol. 13, No. 11, November 1988, pp. 341 - 349.

[Ross et al, 1975]. D.T. Ross, J.B. Goodenough, and C.A. Irvine,
"Software Engineering: Process, Principles, and Goals," IEEE Computer,
Vol. 8, No. 5, May 1975, pp. 17 - 27.

[St. Dennis et al, 1986]. R. St. Dennis, P. Stachour, E. Frankowski,
and E. Onuegbe, "Measurable Characteristics of Reusable Ada
Software,"Ada Letters, Vol. VI, No. 2, March-April 1986, pp. 41 - 50.

[Safford, 1987]. H.D. Safford, "Ada Object-Oriented Design Saves
Costs," Government Computer News, Vol. 6, No. 19, September 25, 1987.
page 108.

[Shlaer and Mellor, 1988]. S. Shlaer and S.J. Mellor, Object-Oriented
Systems Analysis: Modeling the World In Data, Yourdon Press:
Prentice-Hall, Englewood Cliffs, New Jersey, 1988.

[Shlaer and Mellor, 1989]. S. Shlaer and S.J. Mellor, "An
Object-Oriented Approach to Domain Analysis," Software Engineering
Notes, Vol. 14, No. 5, July 1989, pp. 66 - 77.

[Schmucker, 1986]. K.J. Schmucker, "Object Orientation," MacWorld,
Vol. 3, No. 11, November 1986, pp. 119 - 123.

[Simon, 1981]. H.A. Simon, The Sciences of the Artificial, Second
Edition, MIT Press, Cambridge, Massachusetts, 1981.

[Strachey, 1967]. C. Strachey, Fundamental Concepts in Programming
Languages, Lecture Notes, International Summer School in Computer
Programming, Copenhagen, August 1967.

[Tracz, 1987]. W. Tracz, "Ada Reusability Efforts: A Survey of the
State of the Practice," Proceedings of the Joint Ada Conference, Fifth
National Conference on Ada Technology and Washington Ada Symposium,
U.S. Army Communications-Electronics Command, Fort Monmouth, New
Jersey, pp. 35 - 44.

[Tracz, 1988]. W. Tracz, Editor, Tutorial: Software Reuse: Emerging
Technology, IEEE Catalog Number EH0278-2, IEEE Computer Society Press,
Washington, D.C., 1988.

[Ungar and Smith, 1987]. D. Ungar and R.B. Smith, "Self: The Power of
Simplicity," OOPSLA '87 Conference Proceedings, Special Issue of
SIGPLAN Notices, Vol. 22, No. 12, December 1987, pp. 227 - 242.

[Yonezawa and Tokoro, 1987]. A. Yonezawa and M. Tokoro, Editors,
Object-Oriented Concurrent Programming, The MIT Press, Cambridge,
Massachusetts, 1987.

[Zdonik and Maier, 1990]. S.B. Zdonik and D. Maier, Readings in
Object-Oriented Database Systems, Morgan Kaufmann Publishers, Inc.,
San Mateo, California, 1990.