[comp.sw.components] Non-Code Software Components

eberard@ajpo.sei.cmu.edu (Edward Berard) (10/12/89)

I have been involved in software reusability for a number of years.
During this time I have:

	- conducted research in various aspects of software
	  reusability (e.g., domain analysis, reusability metrics, and
	  reusable software construction)

	- conceived of, implemented, and sold on a commercial basis a
	  large (>512K lines of source code) library of reusable
	  components

	- developed and delivered a number of software applications
	  which made conscious and extensive use of reusable components

	- developed and delivered a number of courses on software
	  reusability 

	- provided consulting on software reusability to a number of
	  clients

	- authored a number of papers, and chaired a few sessions on
	  the topic

I do not consider myself an expert on software reusability, but I
would say that I have some experience in the area.

One thing that continues to amaze me is the lack of understanding as
to just what "software" is. Before one can discuss reusing something,
one must know what that something is. When software reuse is
discussed, very often one gets the impression that "software" is:

	a. source code

	b. object code

and nothing else.

I submit that software encompasses much more than mere source code and
object code. For example, I consider all of the following to be
software: 

	- code fragments
	- modules (in the subroutine/procedure/function sense)
	- packages (in the package/module/class sense)
	- large collections (in the library/subsystems sense)
	- applications (in the complete, stand-alone sense)
	- scaffolding code (produced during development, but not
	  part of the delivered product)
	- test data
	- software quality assurance information
	- the deliverable products of feasibility studies
	- the deliverable products of analysis efforts
	- designs
	- plans
	- standards
	- tools
	- environments

You will note that not everything on this list is code software.
Non-code software includes such things as test data, designs, plans,
software quality assurance information, standards, the products of
analysis efforts, and the products of feasibility studies.

Unfortunately, most of the effort in software reusability systems
seems to be focused on code software. Ironically, this is the area
where the return on investment seems to be the lowest. For example, if
analysis and design consume a significantly larger share of the
software life-cycle than does coding, reuse of analysis and design can
potentially provide significantly greater returns than the mere reuse
of code.

The reuse of non-code software involves different issues than does the
reuse of code software. For example, non-code software may contain
graphical information, as well as textual information. Many so-called
reusability systems are not set up to handle graphical
representations. The taxonomies, evaluation criteria, and
interconnections of non-code software are often different from the
same items for code software.

Reusability technology tells us that:

	- The larger a component is, the higher will be the payoff
	  from the reuse of that component, however

	- The larger a component is, the lower will be the probability
	  that component will actually be reused.

This means that domain analysts (i.e., those charged with identifying,
documenting, and configuration managing reusable components) must
constantly balance the size of a component against its potential for
reuse. They must find both the granularities of components, and the
mix of components, which will provide the highest return.
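
The balancing act described above can be made concrete with a toy
model. The shapes of the curves below are purely illustrative
assumptions (linear payoff growth, exponential decay in reuse
probability); the point is only that expected return peaks at some
intermediate granularity, which is what the domain analyst must find.

```python
# Toy model of the granularity tradeoff described above.
# Assumptions (illustrative only): payoff grows linearly with
# component size, while the probability of actual reuse decays
# exponentially with size.
import math

def expected_return(size_kloc, decay=0.05):
    """Expected return = payoff(size) * P(reuse | size)."""
    payoff = size_kloc                       # bigger components save more work...
    p_reuse = math.exp(-decay * size_kloc)   # ...but are reused less often
    return payoff * p_reuse

# Scan candidate granularities and pick the size with the best return.
sizes = range(1, 101)
best = max(sizes, key=expected_return)
print(best)  # with decay=0.05 the optimum falls at 20 KLOC (= 1/decay)
```

Neither extreme wins: tiny components are almost certain to be reused
but save little, and huge ones would save a lot but almost never fit.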

Given that reuse of analysis and design may potentially provide more
benefits than the simple reuse of source code (in fact, may even drive
the structure and selection of reusable source code), we need to ask:

	What should the granularity of analysis and design products be
	to maximize their reusability, and thus our investment in
	making them reusable?

Of course, there are many other issues, e.g., taxonomies, software
engineering approaches (e.g., object-oriented), horizontal vs.
vertical reuse, and documentation styles.

I propose a thread of discussion focusing on the reuse of non-code
software. Further, I propose that, at least at the start, one of the
main topics be the reuse of analysis and design, and, more
specifically, what types of analysis and design products provide a
suitable granularity (i.e., one which has a high potential for a
return on investment (ROI)).

I have some ideas on the subject, but I will wait to see if there is
any interest.

				-- Ed Berard
				   Phone: (301) 353-9652
				   FAX:   (301) 353-9272

P.S.: I know it will seem strange to discuss reusable software
      components in comp.sw.components (sometimes referred to as
      alt.software-eng), but, trust me, it will be worth it. ;-)

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 ) (10/12/89)

From eberard@ajpo.sei.cmu.edu (Edward Berard):
> Non-code software includes such things as test data, designs, plans,
> software quality assurance information, standards, the products of
> analysis efforts, and the products of feasibility studies.

    Wouldn't this really be "software infrastructure", rather than
    software?  Whatever we call it, though, infrastructure reuse
    is certainly an interesting topic.   

> I have some ideas on the subject, but I will wait to see if there is
> any interest.

    Let's hear them!!


    Bill Wolfe, wtwolfe@hubcap.clemson.edu

scotth@boulder.Colorado.EDU (Scott Henninger) (10/12/89)

|>From: eberard@ajpo.sei.cmu.edu (Edward Berard)
|
|I submit that software encompasses much more than mere source code and
|object code. 

Absolutely.  I refer you to Ted Biggerstaff's article in IEEE Computer
(July, 1989).  He stresses the use of a wide scope of software work
products to recover the design issues for source code.  It's not quite
to the point you make, but is the only work I know of that explicitly
tries to draw from sources other than raw code for reuse.

|Unfortunately, most of the effort in software reusability systems
|seems to be focused on code software. Ironically, this is the area
|where the return on investment seems to be the lowest. For example, if
|analysis and design consume a significantly larger share of the
|software life-cycle than does coding, reuse of analysis and design can
|potentially provide significantly greater returns than the mere reuse
|of code.
|[...]
|Given that reuse of analysis and design may potentially provide more
|benefits than the simple reuse of source code (in fact, may even drive
|the structure and selection of reusable source code), [...]

It can go the other way too.  The availability of reusable code
components can drive the analysis and design.

A common mistake in software reusability systems is the assumption that
the management of software projects will remain the same.  If you are
consciously striving to reuse existing software, the processes and the
proportion of time spent on these processes will change.

For example, time must now be spent finding and assessing reusable
components.  This is not accounted for in traditional software
management techniques.  If you believe my above statement, and also
realize that time must be spent on indexing new code for future
reusability, I think you can see that SE techniques must adapt to the
very different programming techniques involved in reusing software.


-- Scott
   scotth@boulder.colorado.edu

rcd@ico.ISC.COM (Dick Dunn) (10/14/89)

eberard@ajpo.sei.cmu.edu (Edward Berard) writes:

> Unfortunately, most of the effort in software reusability systems
> seems to be focused on code software. Ironically, this is the area
> where the return on investment seems to be the lowest. For example, if
> analysis and design consume a significantly larger share of the
> software life-cycle than does coding, reuse of analysis and design can
> potentially provide significantly greater returns than the mere reuse
> of code.

I'm surprised to see this statement.  Perhaps it's true, but from where I
sit (in an organization which does a LOT of contract development of soft-
ware), we re-use bits and pieces all through projects.  We'll start looking
at a new contract, say at the proposal phase, and realize that it has some
major pieces which are close to major pieces of other contracts we've done.
It's only natural to find that, since our repeat or referred business is
based on a reputation established from previous work.

It's important to re-use work from the front-end phases of projects.
That's how you improve your ability to size projects--if you've done it
before, you *know* how long it will take.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...No DOS.  UNIX.

duncan@dduck.ctt.bellcore.com (Scott Duncan) (10/16/89)

In article <16210@vail.ICO.ISC.COM> rcd@ico.ISC.COM (Dick Dunn) writes:
>
>    (in an organization which does a LOT of contract development of soft-
>ware), we re-use bits and pieces all through projects.  We'll start looking
>at a new contract, say at the proposal phase, and realize that it has some
>major pieces which are close to major pieces of other contracts we've done.
>It's only natural to find that, since our repeat or referred business is
>based on a reputation established from previous work.
>
>Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
>   ...No DOS.  UNIX.

This brings up a point that came up when I was talking to people who do con-
tracting for the US Government.  They felt that, in many instances, reuse was
discouraged for this very reason.  That is, if a vendor/contractor were
reusing software, documentation, or anything else developed on another
contract, the government agencies did NOT want the bid to include any cost
for reuse of such software.  After all, it had been paid for already.

Has your organization been able to solve this problem in some way or do you,
in fact, reduce the contract bid based on the cost saved by not redeveloping
the software (or whatever other artifacts are reused)?

Another issue was reuse of software by one contractor which had been
developed by another.  Supposedly this could save the government (and the
contractor reusing it) money (and time), but there was no incentive to do
so because of maintenance issues, and because it actually represented a
loss in revenue to the firm reusing the software, since they could NOT
charge for its redevelopment.

It seems to me that within organizations and between them, the issues of who
does ongoing support for reused software and how reused software will be paid
for are major, non-technical, stumbling blocks to moving ahead with large re-
use efforts.

What opinions/experiences do other people have?  One contractor suggested to me
that the government might consider offering a 'reward' for reuse in that half
the savings would be passed along to the government while half would be divided
equally between the developing and reusing organizations.  This would provide
incentive to both develop reusable software as well as make use of it,
presumably.

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (201-699-3910 (w)   609-737-2945 (h))

dwiggins@atsun.a-t.com (Don Dwiggins) (10/18/89)

In article <598@ajpo.sei.cmu.edu> eberard@ajpo.sei.cmu.edu (Edward Berard) writes:

   I propose a thread of discussion focusing on the reuse of non-code
   software. Further, I propose that, at least at the start, one of the
   main topics be the reuse of analysis and design, and, more
   specifically, what types of analysis and design products provide a
   suitable granularity (i.e., one which has a high potential for a
   return on investment (ROI)).

OK, I'll pick up the thread with a request for setting the parameters.  It
seems to me that the interpretation of the terms "analysis" and "design" is
less than well settled.  Design, for example, can occur in several settings
and at several levels; it would be useful to distinguish among them, and
discuss reusability in the different contexts.

With regard to analysis, I have a question arising from your message.  Early
on, you say that you've

	- conducted research in various aspects of software
	  reusability (e.g., domain analysis, ......

Later, you say

       This means that domain analysts (i.e., those charged with identifying,
       documenting, and configuration managing reusable components) must
       constantly balance the size of a component against its potential for
       reuse. They must find both the granularities of components, and the
       mix of components, which will provide the highest return.

I had the impression that "domain analysis" was the sort of thing that would
be done for a class of applications intended to run in a common domain, to
identify common classes of data, algorithms, report types, etc.  Your
characterization of the task of a "domain analyst" seems to be more like
that of a database administrator for a component database.  Where am I
confused?

--
Don Dwiggins				"Solvitur Ambulando"
Ashton-Tate, Inc.
dwiggins@ashtate.uucp
dwiggins@ashtate.a-t.com

eberard@ajpo.sei.cmu.edu (Edward Berard) (10/19/89)

Don Dwiggins (dwiggins@atsun.a-t.com) writes:
> OK, I'll pick up the thread with a request for setting the parameters.  It
> seems to me that the interpretation of the terms "analysis" and "design" is
> less than well settled.  Design, for example, can occur in several settings
> and at several levels; it would be useful to distinguish among them, and
> discuss reusability in the different contexts.

If you study software engineering technology, and software engineering
history, several things become apparent:

	- The development part of the software life-cycle is basically
	  a series of transformations. Specifically, if a
	  non-technical (in the software engineering sense) user could
	  interact directly with the computer, without having to
	  actually develop an application, there would be no need for
	  software engineers for many everyday applications. However,
	  users currently have to go through intermediaries (i.e.,
	  software engineers and their managers). The job of these
	  intermediaries is to help formulate, clarify, and translate
	  the users' requests into a form which the computer can
	  understand. This formulation, clarification, and translation
	  process is what we often refer to as the development part of
	  the software life-cycle.

	  Traditionally, (systems, requirements, and/or user) analysis
	  was the first step in this transformation process. The idea
	  was to help both the client and the software engineers
	  better understand the problem to be solved. One of the
	  outputs of the analysis process was a description of the
	  system, _as_ _it_ _would_ _appear_ _to_ _the_ _client_.
	  Equally traditionally, the process which followed analysis
	  was referred to as design. The usual concept was that
	  designers implemented the internal (software) architecture
	  of the system. Designers were supposed to discuss things in
	  terms of modules (e.g., subroutines), and other technical
	  (from the software engineering perspective) issues. Design
	  was traditionally followed by programming (coding).

	- Even in the "good old days" people had trouble
	  differentiating between analysis and design. When did
	  analysis stop? When did design begin? How do I
	  systematically take the output of the analysis process and
	  use it as the input to the design process? Some people began
	  to realize that the boundaries between life-cycle phases
	  really were fuzzy.

	- Some people realized that the development of a software
	  product was more a continuous process than it was a series
	  of discrete steps. Some even realized that the distinctions
	  between the analysis, design, coding, and testing phases
	  were purely artificial, and were very often used by
	  management as tools to control a software development
	  effort.

	  If you view a software development effort as a huge
	  monolithic entity, that means that you may have to invest
	  large sums of resources, for an extended period of time,
	  with very little idea of what the final results will be. On
	  the other hand, if you introduce phases into the life-cycle,
	  and specify deliverables for each phase, you have much more
	  control over the overall process. You can make "mid-course
	  adjustments," "go/no go" decisions at the end of each phase,
	  spot trouble earlier, and generally have a much better idea
	  of actual progress. Please note, however, that methods,
	  methodologies, deliverables, well-defined phases, and the
	  mere presence of management do not guarantee success.

	- Others have observed that analysis and design, for example,
	  even take place in the maintenance phase of the software
	  life-cycle.

	- There are many different overall approaches to the software
	  life-cycle, e.g., waterfall, spiral, b-model, and
	  recursive/parallel. (In some of my classes, I show people at
	  least six different approaches.) In object-oriented software
	  engineering, a very commonly used approach is
	  recursive/parallel, sometimes referred to as "analyze a
	  little, design a little, implement a little, test a little."
	  Yes, this means that analysis, design, and implementation
	  may take place at many different points during development.

	- Depending on how you view the terms:

		- Analysis and design are precursors to coding, and do
		  not involve coding.

		- Each time a single character of source code is
		  changed you are actually re-designing. (Notice that
		  software maintenance differs from hardware
		  maintenance in this respect. Specifically, hardware
		  maintenance may involve replacing a defective part
		  with an identical part, whereas software maintenance
		  requires the introduction of different parts.)

		- Analysis may involve analysis of a piece of source
		  code with the intent of finding a problem, or
		  suggesting optimizations or extensions.

		- Et cetera ...
 
> With regard to analysis, I have a question arising from your message.  Early
> on, you say that you've
> 
> 	- conducted research in various aspects of software
> 	  reusability (e.g., domain analysis, ......
> 
> Later, you say
> 
>        This means that domain analysts (i.e., those charged with identifying,
>        documenting, and configuration managing reusable components) must
>        constantly balance the size of a component against its potential for
>        reuse. They must find both the granularities of components, and the
>        mix of components, which will provide the highest return.
> 
> I had the impression that "domain analysis" was the sort of thing that would
> be done for a class of applications intended to run in a common domain, to
> identify common classes of data, algorithms, report types, etc.  Your
> characterization of the task of a "domain analyst" seems to be more like
> that of a database administrator for a component database.  Where am I
> confused?

I suspect it is in your interpretation of the word "component." I have
no problem with "components" being "common classes of data,
algorithms, report types, etc." However, I think we can better define
what we are after in domain analysis.

First, we need some definitions:

	- "An investigation of a specific application area that seeks
	  to identify the operations, objects, and structures that
	  commonly occur in software systems within this area."
			-- Dan McNicholl

	- "Systems analysis states what is done for a specific problem
	  in a domain while domain analysis states what can be done
	  in a range of problems in a domain. ...  A domain
	  analysis is only useful if many similar systems are to
	  be built so that the cost of the domain analysis can be
	  amortized over all the systems.

	  "The key to reusable software is captured in domain analysis
	  in that it stresses the reusability of analysis and design,
	  not code."
			-- Jim Neighbors

	- "A horizontal domain analysis studies a number of different
	  systems across a variety of applications."
			-- Grady Booch

	- "A vertical domain analysis studies a number of systems
	  intended for the same class of applications."
			-- Grady Booch

We can add some more clarifications:

	- Domain analysis is _not_ a software life-cycle activity. It
	  has a beginning, but, usually, never stops. It runs
	  in parallel with the life-cycles of many software products. 

	- Domain analysts must identify reusable items in their
	  application domains, however they will usually be able to
	  classify what they find as:

		- reusable, and highly domain-specific, i.e., these
		  items will have a very low potential for reuse
		  outside of the defined domain. These items are
		  usually in the minority of the items encountered.

		- reusable, and not specific to the defined domain,
		  i.e., these items will have at least some
		  "horizontal reuse potential." These items are
		  usually in the majority of the items encountered.

	- Unfortunately, many people tend to focus on only low-level
	  items (e.g., source code, object code, algorithms, data)
	  during domain analysis. Much larger payoffs are achieved
	  when higher-level items (e.g., plans, standards, design
	  products, analysis products) are also considered.

	- A domain analyst must not only be familiar with the specific
	  domain, he or she must also be familiar with what he or she
	  is supposed to be looking for (e.g., objects and classes),
	  must have excellent abstraction skills, and must be
	  well-versed in software reusability technology.

	- In functional domain analysis, we are looking for things
	  that fit a functional viewpoint of systems in our domain.
	  That is, we are looking for reusable functions, libraries of
	  functions, and, in general, items which will be useful to
	  those software engineers who have taken a functional
	  decomposition approach to the software life-cycle.

	- In object-oriented domain analysis, we are looking for
	  objects, classes, systems of objects, and, in general, items
	  which will be useful to software engineers taking an
	  object-oriented approach to the software life-cycle.

A domain analyst must do much more than simply point to an item and
say that the item is reusable. There is much more work to be done.
Without going into a full-blown course on domain analysis, some of the
things a domain analyst can be expected to do are:

	- Concisely and precisely define the application domain.

	- Identify a representative sample of applications which span
	  the domain, and use this sample to begin to extract
	  potentially reusable items.

	- Accurately name and document each reusable item. Of course
	  this brings up issues like nomenclature and definitions.

	- Clean up, parameterize, and otherwise refine potentially
	  reusable items.

	- Identify additional reusable items suggested by those
	  already identified.

	- Store and retrieve both the reusable items, and information
	  associated with the items.

	- Develop guidelines for the (re)use of the reusable items.

	- Demonstrate reuse using the reusable items and the
	  guidelines.

	- Interact with all those involved in the software engineering
	  process, e.g., requirements analysts, testers, designers,
	  coders, software quality assurance personnel, managers, and
	  maintenance personnel. The specifics of these interactions
	  are too involved to go into in this short message.
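
The "store and retrieve" task above can be sketched in miniature.
This is not any particular reuse system of the period; it is a
hypothetical in-memory catalog, with made-up names and fields, meant
only to show faceted storage and retrieval of both code and non-code
items. Real systems add taxonomies, evaluation criteria, and
configuration management on top.

```python
# Minimal sketch of a reusable-component catalog, as a domain analyst
# might maintain one.  All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    kind: str              # e.g. "design", "test data", "package"
    domain: str            # the application domain it came from
    domain_specific: bool  # True = low horizontal-reuse potential
    guidelines: str = ""   # notes on how to (re)use the item

class Catalog:
    def __init__(self):
        self._items = []

    def store(self, item):
        self._items.append(item)

    def retrieve(self, kind=None, domain=None):
        """Return items matching the given facets (None = any)."""
        return [c for c in self._items
                if (kind is None or c.kind == kind)
                and (domain is None or c.domain == domain)]

cat = Catalog()
cat.store(Component("ledger ERD", "design", "accounting", True))
cat.store(Component("date package", "package", "accounting", False))
print([c.name for c in cat.retrieve(kind="design")])  # ['ledger ERD']
```

Note that the design product and the code package sit in the same
catalog on equal footing, which is the point of the whole thread.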

There is much more I could say. (Yes, I do teach a course on the
subject.) But this message is already too long. Thanks for listening.

				-- Ed Berard
				   (301) 353-9652

kcby@ncratl2.Atlanta.NCR.COM (kcby) (10/20/89)

The idea of reusing analysis and design components of software development
efforts is very interesting to me.  In fact, as someone who has been
involved primarily with the analysis and design "phases" of projects for a
number of years, it may rank as one of my *major* interests. :-)

Here's my problem. The information associated with the analysis and design
must somehow be documented, or it can only be reused by the people who were
around when the original analysis and design were done.  Typically we have
'encoded' some of this information in specifications either in English (or
other native language) or by using some diagramming technique. Actually,
what we have recorded is the *results* of the analysis and design. The
analysis and design effort took place in meetings, discussions, etc. which
may or may not have been documented through letters, meeting notes,
technical notes, etc.

As an individual, I have maintained analysis and design information from
past projects, and have made use of (some of) it on new projects.  The
information (handwritten papers, meeting notes, letters, faxes, paper
napkins with preliminary design diagrams on them, etc.) is currently
organized in chronological order, by project. As such, *I* can (sometimes)
find the information which I need when it comes time to reuse it, because I
was responsible for storing it. However, just having kept the information
doesn't mean it is reusable.  At times I can't find information I know is
there, and worse, other people (even when given access to the "paper")
haven't the foggiest idea where to start, or whether the information is even
there.

Now a few questions:

  What would analysis and design information look like if I wanted that
  information to be reusable by someone other than me? (freeform text?
  structured text? defined by a specific methodology? up to the
  originator?)

  How can it be made accessible? (Linear search through 15+ binders of
  paper organized in chronological order and associated with a single
  project doesn't work!)

  Is there enough reuse possible to justify the (apparently) extra effort
  required to document more than just the results of analysis or design?
  (That is, how do I convince people who don't even want to write down the
  *results* of analysis and design (in specifications) that they should
  somehow document the information they used to produce the analysis and
  design specs?)
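
One hedged, partial answer to the "accessible" question: even freeform
notes become findable if each one is tagged with a few keywords at
filing time. A toy inverted index, with made-up note ids, texts, and
keywords, purely to illustrate the idea:

```python
# Toy inverted index over freeform project notes.  The point is only
# that keyword tagging at filing time beats chronological binders;
# the notes and keywords here are invented for illustration.
from collections import defaultdict

index = defaultdict(set)   # keyword -> set of note ids
notes = {}                 # note id -> text

def file_note(note_id, text, keywords):
    notes[note_id] = text
    for kw in keywords:
        index[kw.lower()].add(note_id)

def lookup(*keywords):
    """Return note ids matching ALL of the given keywords."""
    sets = [index[k.lower()] for k in keywords]
    return set.intersection(*sets) if sets else set()

file_note("mtg-04", "Decided on batched report generation.",
          ["reports", "batch", "design"])
file_note("fax-11", "Client wants nightly batch window at 2am.",
          ["batch", "requirements"])
print(sorted(lookup("batch")))            # ['fax-11', 'mtg-04']
print(sorted(lookup("batch", "design")))  # ['mtg-04']
```

This sidesteps neither hard question (what vocabulary of keywords, and
who pays for the tagging effort), but it shows why freeform text plus
a little structure at filing time may be enough to start with.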

I hope the original poster will forgive me if this has gotten off of his
topic somewhat. But I believe there is at least some relationship involved.

KC Burgess Yakemovic
NCR Corporation, RSD Atlanta
email: kcby@Atlanta.ncr.com    
phone: 404-441-8135