[comp.software-eng] Soft-Eng Digest V4 #15

MDAY@XX.LCS.MIT.EDU ("Mark S. Day") (03/11/88)

Soft-Eng Digest             Thu, 10 Mar 88       Volume 4 : Issue  15 

Today's Topics:
          Call for Papers: Northwest Software Quality Conf.
                       Looking for CASE Tools
                           CASE Tool Usage
                       CHI '88 Discussion Group
                Conference on Software Maintenance '88
              Coordinating Software Development (2 msgs)

----------------------------------------------------------------------

Date: 24 February 88 07:48-PAC
From: COSTEST%IDUI1.BITNET@CUNYVM.CUNY.EDU
Subject: Call for Papers: Northwest Software Quality Conf.

CALL FOR PAPERS:  Sixth Annual Pacific Northwest Software Quality
Conference

The conference will be held September 19-20, 1988 at the Portland
Marriott, Portland, Oregon.  This year's keynote speaker is Edward
Yourdon.  Original papers are solicited on all aspects of software
quality.  Papers will be selected on the basis of abstracts, and
refereed for publication in the proceedings.  There will be $600
awarded for best papers.

Topics of interest include (but are not limited to): productivity and
quality, re-engineering software, real-time software, project
management, user interfaces, CASE environments, off-the-shelf
software, parallel software, object-oriented design, cost/schedule
estimation, design methodologies, documentation, reusability, version
control, case studies, testing technology, risk management, software
metrics, marketing quality, software standards, and liability.

Abstracts should include enough detail to give reviewers an
understanding of the final paper, including a rough outline of its
contents.  Describe what has been learned and what the work implies,
rather than detailing what was done.  The abstract should be
2-4 pages long.  Please indicate if the work is experimental or
theoretical, whether software has been implemented, whether a case
study is described, and if the most likely audience is managerial or
technical.  On a separate sheet provide the paper title, complete
mailing address, telephone, e-mail address, a list of keywords/phrases
identifying the subject, and a brief biographical sketch of the
author(s).

Submit 5 copies by April 1 to:

      Dick Hamlet, Program Chair
      Oregon Graduate Center
      19600 NW von Neumann Drive
      Beaverton, OR  97006  USA
      CSNET:  hamlet@cse.ogc.edu

------------------------------

Date: 25 Feb 88 17:09:17 GMT
From: sdcc6!loral!shawn@ucsd.edu  (Shawn McFarr)
Subject: Looking for CASE Tools

I am looking for CASE tools that support JSD, OOD, or Data Flow
design methodologies.  If anyone has used any of these methodologies,
could you also tell me what you think their strengths and weaknesses
are?  For Jackson System Development in particular, in what
applications have you used the JSD approach?

If there is enough interest, I will post the results.

Shawn McFarr
Loral Instrumentation, San Diego
{ucbvax, ittvax!dcdwest, akgua, decvax, ihnp4}!sdcsvax!sdcc6!loral!shawn


------------------------------

Date: Wed, 24 Feb 88 14:52:55 PST
From: jon@june.cs.washington.edu (Jon Jacky)
Subject: CASE Tool Usage

I have been looking at commercial CASE tools - in particular the products based
on bubbles-and-arrows diagrams (including Excelerator from Index Technology,
Teamwork from Cadre, and several others) - to consider how useful they might be
for a project my lab is embarking on - a control system for a medical cyclotron
and associated machinery for performing neutron radiation therapy for cancer.

Here is a memo (with references) summarizing my findings.  I emphasize that I
have NOT actually used any of these CASE tools; these conclusions are based on
surveys of journal and trade magazine articles, advertising, informal
conversations with users, and a few demonstrations I saw.

To summarize, I concluded that this class of CASE tools doesn't address
the problems we find most difficult, namely specifying the behavior of the
system, and designing a good set of tests.  Moreover, the bubbles-and-arrows
diagrams don't really express anything that doesn't appear in a program
outline consisting only of data declarations, procedure and function
declarations, and a program skeleton consisting only of procedure and function
calls, written in a strongly-typed language with nested scope, such as Pascal.
The bubble-and-arrow diagrams simply provide an alternate, graphic
representation of such a text.

One of the reasons I am submitting this to SOFT-ENG DIGEST is to learn whether
other readers believe this is a correct and fair appraisal, or whether I am
missing something.

-- Jonathan Jacky
University of Washington

(Here follows the text of my memo - bibliographic references at the end)

-------------------------------------------------------------------------------

Structured Analysis (SA) is a catch-all term that describes the many similar
methodologies based on _data flow diagrams_ that consist of boxes and bubbles
with arcs and arrows going in and out.  You begin by deciding where the
boundary lies between the system to be specified and the outside world.  Then you
draw a big box representing this boundary, and draw an arrow going into the box
for every input, and an arrow coming out of the box for every output.  You
label all the arrows, and as you do you make entries into a _data dictionary_
to keep track of these labels and what they represent.  For the cyclotron
control system, one arrow going into the box might represent the cyclotron
operator's terminal keyboard and one arrow coming out might represent the
status display screen. Then you consider the inside of the big box and you fill
it in with several other boxes, each representing a subsystem of the whole
system. You apportion the input and output arrows to the subsystem boxes and
add some more arrows to connect the subsystems, keeping the data dictionary up
to date as you go (descriptions of the operations performed by each subsystem
are also kept in the data dictionary).  Then you start a new diagram for each
subsystem box, dividing the interior of each subsystem into sub-subsystems. You
just keep going until you reach the level of detail at which you feel that the
specification is complete.

Structured analysis created a lot of excitement when it appeared in the mid
1970's.  Bergland provides a good review of several variants [1].
Disillusionment came when projects collapsed beneath the effort of
keeping all those diagrams and dictionaries up to date when changes had to be
made  [2].  For projects of significant size, SA is just not a practical
paper-and-pencil technique.  SA is currently enjoying a renaissance thanks to
the appearance of powerful graphics workstations.  Many vendors offer software
packages that allow you to draw the dataflow diagrams right on the screen,
meanwhile typing data dictionary entries into another window. The packages
automatically check the consistency between the various diagrams and the
dictionary, and propagate changes when they are made.  Up-to-date hardcopy can
be printed whenever it is needed.  These products are called CASE packages, for
"computer-aided software engineering."  Many shops use them and find them
very helpful.

Much of the appeal of SA is that the specifications suggest how to write the
program.  You just keep breaking down the subsystem boxes until you obtain
units  which are simple enough to identify with subroutines or procedures or
tasks or whatever the building blocks of your chosen programming language are.
Then you assign the units to programmers and they can start coding; everything
ought to work when you put the pieces together.  In fact some of the products
use the data dictionary and the diagrams to generate a sort of skeleton or
template program with the procedure headers and data declarations already
written in your language of choice.  Programmers need only fill in the
executable statements. [3]

It is important to note what SA does _not_ attempt to do.  SA addresses the
problem of breaking a big system down into subsystems; it simply begs the
question of how to specify the behavior of the subsystems.  It is still up to
the users to solve this problem.  Most SA products provide some way to
associate descriptive text with each subsystem.  This can be used to
convey an informal
English-language description.  Some products define somewhat more formal
notations for this purpose, including programming language-like notations
called _pseudocode_ or _program development languages_ (PDL's), or special
notations for describing finite state machines.

Moreover, SA does not really specify how the various subsystems are invoked.
Are the various subsystems separate procedures which are invoked in turn,
perhaps in some kind of left-to-right order?  Or are they concurrent processes
which run simultaneously, each consuming the output of its predecessor as it
appears?  Both alternatives, and many others besides, are possible. SA was
originally created for business data processing and the classic data flow
diagrams left out a lot of the information needed to specify control systems,
including timing, concurrency, and synchronization.  This had to be handled in
the same old informal way.  To help remedy this, there is a variation, now
supported by several of the commercial products, in which different kinds of
arrows are used to represent data and various kinds of control signals [4,
5].  At least one vendor supports a detailed design and modelling stage in
which the subsystem boxes are translated to concurrent processes and the
various kinds of arrows are translated to the interprocess communication
mechanisms supported by the host operating system, such as queues and
mailboxes.  This product will even estimate response times to various kinds of
events. [6]

SA really deals with internal design and implementation more than
specification. Only the first diagram, the one consisting of a single big box,
really represents the requirements specification proper.  When the system is
very large, including hundreds of input and output signals, the graphic
representation is probably less useful than a more tabular text format.

A collection of data flow diagrams contains essentially the same information as
a program in Pascal (or any other strongly-typed language) consisting only of
declarations and procedure and function calls.   The data dictionary
corresponds to Pascal type declarations and variable declarations.  The various
boxes and bubbles correspond to procedure declarations.  The arcs and arrows
are the passed parameters (to be rigorous, incoming arrows should be value
parameters and outgoing arrows should be reference (Pascal "var") parameters).
The consistency checking is essentially what you get from any Pascal compiler,
which ensures that each procedure is defined before it is used, that each
procedure uses parameters consistent in number and type with its definition,
and (in many compilers) warns when a variable is used before it is set.

The diagrammatic representation is simply another way to view the decomposition
of a program into smaller units, which many people find helpful.  However, it
requires additional hardware and software, is less compact, and probably
requires more time to prepare, than the text representation.  Moreover, it
introduces new problems unrelated to the real goal of designing a system.  For
example, CASE product users tend to spend a lot of time working with (some
users say "struggling with") the graphics editor for data-flow diagrams, trying
to make diagrams that are attractive and easy to read.  The various products
offer more or less assistance with this, but in any case these diagrams really
ought not to be regarded as the final product of the development effort. Moreover,
certain features of some products are not well-matched to subsequent stages of
development, or to good design practices in general.  For example, in one
product I saw, data dictionary entries were not written in the target
programming language, requiring a redundant manual translation step. Moreover,
the data dictionary was "flat" - data items from the various subsystems, which
should have been visible only within their own subsystems, all appeared
together in the same list with the items from the top level system that
had to be visible throughout.

The value CASE products offer is primarily managerial rather than technical.
They enforce the discipline of defining the data and the operations on them
before beginning coding, and provide a convenient way to generate a lot of
attractive documentation.  This is probably very valuable in many environments.


REFERENCES

1. Bergland, G.D.  Structured design methodologies, Fifteenth Annual Design
Automation Conference Proceedings, IEEE, 1978, 475 - 493. Reprinted in
Bergland, G.D. and Gordon, R.D. Software Design Strategies, IEEE Computer
Society, 1981, 297 - 315.

2. Yourdon, E.  Whatever happened to structured analysis?  Datamation,
June 1 1986, 133-136.

3. Olson C., Webb W., and Wieland R. Code generation from data flow diagrams.
Third International Workshop on Software Specification and Design.  IEEE
Computer Society, 1985, 172 - 176.

4. Ward, P.T.  The transformational schema: an extension of the data flow
diagram to represent control and timing.  IEEE Trans. Software Eng. SE-12(2)
February 1986, 198 - 210.

5. Ward, P.T. and Mellor, S.J. Structured development for real-time systems.
Yourdon, 1985.

6. Manuel, T. At last: a toolkit for real-time software.  Electronics,
Sept. 17, 1987, 81 - 83.

------------------------------

Date: Fri, 26 Feb 88 08:45:57 EST
From: Paul Kahn <PDK%BROWNVM.BITNET@mitvma.mit.edu>
Subject: CHI '88 Discussion Group

I would be interested in participating in a workshop on coordinating
user interfaces for consistency at the CHI '88 conference. Several of us
from the Institute for Research in Information and Scholarship (IRIS) are
planning to attend: myself, Jim Coombs (jhc%iris.brown.edu) and Karen
Smith (kes%iris.brown.edu). We are working on extensions to Intermedia,
the hypermedia system under development here, for dealing with large
external databases: dictionaries, full-text collections, citation
collections, in heterogeneous formats.

------------------------------

Date: 25 Feb 88 09:44:00 EST
From: <osborne@icst-ecf.arpa>
Subject: Conference on Software Maintenance '88

        Conference on Software Maintenance-1988 CSM-88

        Phoenix, Arizona
        October 24-27, 1988

        Sponsors: 
        IEEE Computer Society
        National Bureau of Standards
        DPMA
        ACM-Sigsoft
        AWC
        SMA


        The Conference on Software Maintenance will gather software
        managers, developers, maintainers, and researchers to discuss
        new solutions to the continuing challenge of software maintenance
        and software maintainability.  CSM-88 will focus on the
        processes that impact software maintenance.  CSM-88 seeks papers
        which clearly indicate advances in the field; the time frame for
        achieving those advances; and any evidence suggesting  that  the
        advances can be realized.

        Theme: "Making a Difference: Improving The Product 
                By Improving The Process"

        Topics:

        Papers on any aspect of software engineering designed to improve
        the maintainability of the software and the
        state-of-the-practice:
                    Environments
                    Empirical Studies
                    Measurement and Metrics
                    Expert Systems/AI
                    Ada Programs
                    Reusability
                    Configuration Management
                    Developing Maintainable Software
                    Standards
                    Distributed Systems
                    Education
                    Testing
                    Impact of PDLs
                    Restructuring/Reengineering
                    4GLs
                    Other Tools and Techniques

        *************Submission Date: March 18, 1988*****************

------------------------------

Date: Thu, 25 Feb 88 17:45:59 EST
From: Dan Franklin <dan@WILMA.BBN.COM>
Subject: Coordinating Software Development

An approach we've taken in our group when a set of changes may take
weeks or longer is to put the changes into the source under #ifdefs.
The person doing the development sets up his/her compilation
environment to turn on the #ifdef'd code; everyone else has it off.
Thus, the changes affect only the developer making them, even when
they are checked back into the main RCS tree so other people can
modify the same files.  When the changes are finally ready to go,
the #ifdefs are removed everywhere in one big swoop.  (Actually
there are more gradual ways to install the changes, but that's
the simplest.)

This is particularly good for incompatible changes that spread over
several libraries in the system.  It does mean that every developer
has to look out for the #ifdefs and not stumble over them, and of
course the person who put them in has to watch for changes from other
people.  Since all our RCS log messages are sent as electronic mail to
all the developers on the project, this is pretty easy.  (The project
is large for BBN, 10 - 15 developers and almost 100K lines of C code,
but pretty small by other standards; this technique might not scale up
to large projects that well.)

	Dan Franklin
	BBN Laboratories Incorporated

------------------------------

Date: Mon 29 Feb 88 13:41:11-PST
From: Martin Feather <FEATHER@vaxa.isi.edu>
Subject: Coordinating Software Development

With regard to integrating different development branches of a software
product, we are facing similar issues in our research to support the
construction of software specifications.  We build specifications by
starting with trivially simple specifications, and then incrementally
elaborating them.  When elaborations are "independent" (or nearly so),
they are applied in parallel, leading to diverging specifications which
must later be "recombined" - equivalent to "integrating".

We achieve integration by "replaying" the elaborations in a linear order
(note that it is important to have remembered the elaborations, as well
as the specifications that they produced).  When elaborations are
"independent", they can often be replayed automatically.  When
elaborations are NOT independent (analogous to "interference" between
program versions), we compare the ELABORATIONS to determine the nature
of the dependence (dependence means there are further choices, requiring
further input from the specifier to select from among them).  This
comparison process is an area of ongoing research.

By "we" I mean the KBSA project here at ISI, headed by Lewis Johnson.
If anyone is interested, I can send them a copy of my paper, supposedly
to appear in IEEE TSE (I know not when).  Research into combination of
elaborations is also being pursued by Steve Fickas' group at the
University of Oregon, Eugene.

Martin S. Feather
ARPAnet:  feather@vaxa.isi.edu
USC/Information Sciences Institute, 4676 Admiralty Way #1000,
Marina del Rey,  CA 90292
(213) 822-1511

------------------------------

End of Soft-Eng Digest
******************************
