[comp.software-eng] CASE - The Emperor has no clothes on!

charlie@genrad.com (Charlie D. Havener) (06/08/90)

There has been a lot of talk about CASE tools in this group lately.
The questions seem to be where can I get one and how is Brand X
slightly better than Brand Y. It seems to me the real question is
'Are Structured Design CASE tools worth investing time, effort and money in?'
I have tentatively formed my decision. The answer is NO!

Recently I have done a lot of reading on CASE tools, experimented with
CADRE's TeamWork and IDE's StP ( Software Through Pictures $5k to $20k per seat ),
read much of the Hatley-Pirbhai text, and played with EasyCASE ( the $200
PC product, formerly shareware ).  I have also learned C++, taken courses in Object
Oriented Design and Analysis ( e.g. a seminar by Ed Berard ), read the Peter Coad
& Ed Yourdon text "Object-Oriented Analysis", and read parts of the new Grady Booch
text "Object-Oriented Design with Applications".

The message I get and believe is that Object Oriented analysis, design and implementation
via a supportive language is superior in all ways to the old structured analysis approach.
It seems to me the vendors of these old CASE tools are blowing all the smoke and fog they can
to protect their lucrative markets, which depend on management Fear, Uncertainty and Doubt.
The CASE vendors are happy as can be when one of their employees gets an article published
that, in effect, says 'Don't worry - Structured Analysis works just fine with objects', while
all the independent authors are saying it is completely orthogonal to Object Oriented techniques
and has very little if any use whatsoever. Maybe it can be used to help implement some
complex method ( or member function ) of an object but other than that - forget it.

A quote from the Coad/Yourdon text, "Computer-Aided Software Engineering (CASE) -- it's hard
to believe that such simplistic software tools are getting so much attention these days.
....  a less sexy but more accurate name would be CADC - Computer Aided Drawing and Checking."

The government loves paper - these tools help you create a monument of paper. There are
some tools for Object Oriented approaches available ( e.g. Semaphore Tools, Andover Mass. )
but one of the nice things about Objects is that you can do fine with no CASE tools at all.
The CRC ( Class Responsibility Collaborators ) method is one that uses 4 by 6 index
cards and it seems well on its way to becoming a standard technique.

I've said enough. If anyone can provide a substantive rebuttal I would like to hear it.

---------------------------------------------------------------------------
These opinions are mine alone - I do not speak for GenRad on this subject
charlie@genrad.com      508-369-4400 x3302     GenRad Inc. Concord Mass.

pnm@goanna.cs.rmit.oz.au (Paul Big-Ears Menon) (06/09/90)

charlie@genrad.com (Charlie D. Havener) writes:

>...
>The CRC ( Class Responsibility Collaborators ) method is one that uses 4 
>by 6 index
>cards and it seems well on its way to becoming a standard technique.
	
	I'm looking at this method to teach first-year students - it
    appears the most appropriate way to get them used to "thinking like an object".
    I would dearly love to find some further refs on this, apart from the 
    OOPSLA proceedings.

>I've said enough. If anyone can provide a substantive rebuttal I would 
>like to hear it.
	I can't, but I can add something to explain why (you mentioned it
	too).  CASE tools mean money, books, courses - a whole business.
	No one wants to promote something which doesn't make money.  As I
	see it, Object-Oriented design is *intuitive*; it's simple, but
	intangible - i.e., there are no rules, formal methods/procedures
	one follows.  It is, in a sense, still an art to determine class
	structure, etc.  What CASE guys hate is - it's not strictly top-down, 
	has no flow-charts, nor structure diagrams.  CRC is ample, 
	technology-free and is a great way to utilise those boxes of unused
	punch cards.  I am open to alternate views though and would also 
	like to hear them.

    Paul Menon,
    Dept of Computer Science,
    Royal Melbourne Institute of Technology, 
    124 Latrobe Street,
    Melbourne 3001, 
    Victoria, Australia.

pnm@goanna.cs.rmit.OZ.AU
PH:	+61 3 660 3209

jharkins@sagpd1.UUCP (Jim Harkins) (06/10/90)

In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
>'Are Structured Design CASE tools worth investing time, effort and money in?'
>I have tentatively formed my decision. The answer is NO!

I second the motion, they have nowhere near the power and flexibility of UNIX
or even DOS.

>The government loves paper - these tools help you create a monument of paper.

Yep, and the tools I've seen are oriented towards MIL-SPEC 2167, which
specifies exactly what format this mound of paper will be in.  Note 2167 has
absolutely nothing to do with good code; in fact some of its requirements
almost force bad code.  2167 is for government flunkies to cover their asses
when (not if, under 2167) things go wrong.

>I've said enough. If anyone can provide a substantive rebuttal I would like
>to hear it.

Me too, I'm successfully pushing for the abandonment of our CASE tool here.
Basically it addresses problems I don't have, doesn't do a damn thing for the
problems I do have, and causes me problems I wouldn't have if I didn't use the
thing.

I guess I should put a disclaimer in here as we are a government contractor.
My views are my own, if you want to know if management shares them you should
submit a Request For Proposal that we can use to generate a mound of paper :-)


-- 
jim		jharkins@sagpd1

I hate to see you go, but I love to see you walk away.

cox@stpstn.UUCP (Brad Cox) (06/10/90)

In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
>There has been a lot of talk about CASE tools in this group lately.
>The questions seem to be where can I get one and how is Brand X
>slightly better than Brand Y. It seems to me the real question is
>'Are Structured Design CASE tools worth investing time, effort and money in?'
>I have tentatively formed my decision. The answer is NO!

The trouble with Computer Aided Software Engineering is that it presumes
the existence of such a thing as Software Engineering.

How can robust engineering or even scientific practices ever develop in
a field so long as *everything* is reinvented from first principles?
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

kjeld@iesd.auc.dk (Kjeld Flarup) (06/11/90)

In article <3205@goanna.cs.rmit.oz.au> pnm@goanna.cs.rmit.oz.au (Paul Big-Ears Menon) writes:
>charlie@genrad.com (Charlie D. Havener) writes:
>
>>...
>>The CRC ( Class Responsibility Collaborators ) method is one that uses 4 
>>by 6 index
>>cards and it seems well on its way to becoming a standard technique.
>	
>	I'm looking at this method to teach first-year students - it
>    appears the most appropriate way to get them used to "thinking like an object".
>    I would dearly love to find some further refs on this, apart from the 
>    OOPSLA proceedings.
>
>>I've said enough. If anyone can provide a substantive rebuttal I would 
>>like to hear it.
>	I can't, but I can add something to explain why (you mentioned it
>	too).  CASE tools mean money, books, courses - a whole business.
>	No one wants to promote something which doesn't make money.  As I
>	see it, Object-Oriented design is *intuitive*; it's simple, but
>	intangible - i.e., there are no rules, formal methods/procedures
>	one follows.  It is, in a sense, still an art to determine class
>	structure, etc.  What CASE guys hate is - it's not strictly top-down, 
>	has no flow-charts, nor structure diagrams.  CRC is ample, 
>	technology-free and is a great way to utilise those boxes of unused
>	punch cards.  I am open to alternate views though and would also 
>	like to hear them.

I have not read about the CRC method, but how is it to maintain?
When it comes to maintaining, SASD documentation is very informative.

Because the government wants to have the information needed to review and later
maintain systems, the large SASD documentation fulfills this need.

Now, do other development methods give good documentation? And can
it be done properly without a computer? I mean, you do not write books in
handwriting anymore. Furthermore, do you update documentation in handwriting?

-- 
*     I am several thousand pages behind my reading schedule.    *
Kjeld Flarup Christensen                         kjeld@iesd.auc.dk

charlie@genrad.uucp (Charlie D. Havener) (06/11/90)

There have been several requests for further references on CRC cards.
CRC stands for Class name, responsibilities, collaborators. See OOPSLA
Proceedings 1989 pages 1-6. This is the original paper as far as I know.
It is by Kent Beck, and Ward Cunningham and is entitled "A Laboratory
for Teaching Object-Oriented Thinking". 

The best description I have read is in a soon-to-be-published text on Object
Oriented Programming by Tim Budd, Department of Computer Science,
Oregon State University, Corvallis, Oregon 97331. He devotes Chapter 2 to the
method. He has used it in his course at Oregon State. I like the new
Grady Booch text on OOD but was surprised that he made no mention of the
CRC method.

There have been tutorials on the method at the '89 OOPSLA, at the recent
SCOOP ( Seminars and Conference on OOP ) at the Wang Institute of BU, and
I hope there will be one at the Oct 1990 OOPSLA/ECOOP conference in Ottowa
Canada. ( Ottawa -- Hmmm, that doesn't look right either. )

bwb@sei.cmu.edu (Bruce Benson) (06/11/90)

In article <5190@stpstn.UUCP> you write:
>
>The trouble with Computer Aided Software Engineering is that it presumes
>the existence of such a thing as Software Engineering.
>
>How can robust engineering or even scientific practices ever develop in
>a field so long as *everything* is reinvented from first principles?

Does this failure to exercise discipline really imply a lack of engineering
know-how?  Maybe it is still just too costly for what you get out of it?

How much has bridge building changed since the first bridge was built? Sure
the techniques and materials have changed dramatically, but a bridge of
today would still be recognized by a builder of centuries past.  The same
for buildings.  

In software we seem to always push the limit; it gets larger and more
complex - always at the limit of what we can do.  Doesn't a 4GL that 
generates database update and report applications represent the bridge 
construct?  The better we understand (and do not vary) the application
the better we get at generating it.

Isn't what we are trying to achieve with Software Engineering simply a 
restricted subset of the GPS (General Problem Solving) Algorithms of the
past decades?  Are we not trying to create a general problem solving (and
construction) algorithm by trying to find an effective software engineering
method that works for everything?

Are we not trying to solve problems we have not yet defined?  For those we
have defined, I agree, we don't always use the engineering disciplines that
we do know are effective.

Just wondering....

Bruce

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8496    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

gbeary@uswat.uswest.com ( Greg Beary ) (06/11/90)

Given the interest in CRC cards I thought I'd chime in. The CRC
approach is more like "storyboarding" than SA/SD. It is a 
vehicle for a group of developers/designers to discuss the
system they're working on.            

You specify the name of a class on the top of the card, its
responsibilities (on the left 2/3rds of the card), and its
collaborations (which classes/methods it interacts with) on
the right 1/3. You then use these cards to help decompose 
and explore the system under discussion. We have coupled
this with Ivar Jacobson's notion of Use Cases and 
will combine CRC cards to describe a particular Use 
Case. 
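
For concreteness, here is a minimal sketch of a CRC card captured as a
data structure - C++ for illustration only, all names invented here;
the method itself needs nothing more than the physical cards:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // A CRC card: class name on top, responsibilities on the left
    // 2/3rds, collaborators on the right 1/3.
    struct CRCCard {
        std::string className;
        std::vector<std::string> responsibilities;
        std::vector<std::string> collaborators;
    };

    int main() {
        CRCCard card;
        card.className = "OrderDesk";
        card.responsibilities.push_back("accept coupons");
        card.responsibilities.push_back("send order requisitions");
        card.collaborators.push_back("Warehouse");

        std::cout << card.className << "\n";
        for (std::size_t i = 0; i < card.responsibilities.size(); ++i)
            std::cout << "  responsibility: " << card.responsibilities[i] << "\n";
        for (std::size_t i = 0; i < card.collaborators.size(); ++i)
            std::cout << "  collaborator: " << card.collaborators[i] << "\n";
        return 0;
    }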

I have a paper on our approach to Object-oriented Analysis
and Design (OOAD) that I can share with interested parties.
(Unfortunately, it's been done on Mac with MSWord and 
I don't know how well it could be distributed over the net.)
The approach we use couples Use Cases, CRC cards, Use Case
based Object-Communication Models, and EER diagrams to 
perform OOAD. 

It is written on a "practitioner's level" and makes no 
pretense of being a work of academic research.  I'd
be willing to US mail it to interested parties 
providing that they would be willing to critique it.
(Continued Process Improvement).
 
   
 
--
Greg Beary 
US West Advanced Technology  
6200 S. Quebec St.          
Englewood, CO  80111

gbeary@uswat.uswest.com ( Greg Beary ) (06/11/90)

I also wanted to mention that the Univ. of North Carolina
Continuing Ed. Department sponsored a training video 
on CRC cards. (At least that's what I think the name 
of the sponsoring organization was.) It was done by the
people at Knowledge Systems Corporation. The last time
I checked with Reed Phillips at KSC there was some 
hangup getting the University to release the tape. 


--
Greg Beary 				|  (303)889-7935
US West Advanced Technology  		|  gbeary@uswest.com	
6200 S. Quebec St.         		| 
Englewood, CO  80111			|

mitchell@community-chest.uucp (George Mitchell) (06/11/90)

In article <37560@genrad.UUCP> charlie@genrad.genrad.COM (Charlie D. Havener)
wrote:
`There have been several requests for further references on CRC cards.
`CRC stands for Class name, responsibilities, collaborators. See OOPSLA
`Proceedings 1989 pages 1-6. This is the original paper as far as I know.
`It is by Kent Beck, and Ward Cunningham and is entitled "A Laboratory
`for Teaching Object-Oriented Thinking". 
Also see pp. 71-76, "Object-Oriented Design: A Responsibility-Driven
Approach", by Rebecca Wirfs-Brock and Brian Wilkerson.

`The best description I have read is in a soon-to-be-published text on Object
`Oriented Programming by Tim Budd, Department of Computer Science,
`Oregon State University, Corvallis, Oregon 97331. He devotes Chapter 2 to the
`method. He has used it in his course at Oregon State. I like the new
`Grady Booch text on OOD but was surprised that he made no mention of the
`CRC method.
Also nearing publication by Prentice-Hall is _Designing_
Object-Oriented_Software_ by the same authors as above.
-
/s/ George   vmail:  703/883-6029
email:  gmitchel@mitre.org    [alt: mitchell@community-chest.mitre.org]
snail:  GB Mitchell, MITRE, MS Z676, 7525 Colshire Dr, McLean, VA  22102

jimad@microsoft.UUCP (Jim ADCOCK) (06/12/90)

In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
....
>The message I get and believe is that Object Oriented analysis, design and implementation
>via a supportive language is superior in all ways to the old structured analysis approach.
....
>....  a less sexy but more accurate name would be CADC - Computer Aided Drawing and Checking."
>
>The government loves paper - these tools help you create a monument of paper.
....

[Caution: Fuzzy, but Heretical Comments Follow]

I second your concerns about CASE -- and traditional software design techniques
in general -- as applied to OOP.

The traditional approach is structured design, top-down programming, CASE,
paper-work, etc.  The traditional approach tends to be management heavy --
the top-down factoring of the project is mirrored along the top down 
factoring of the people working on the problem.  The project follows 
traditional military aka business organization.  Flow of control is from
manager to subordinate.  Everything done is documented on paper.

I claim OOP is quite different than this.  It is not naturally a top-down
approach, but rather a topless approach -- somewhat analogous to Minsky's
Sea of Agents.  Things have a strong tendency [heavens!] to be built from
the bottom up.  This scares managers to death -- they're losing control --
and they try to force a global top-down design on top of the object oriented
approach.  But OOP doesn't naturally have a top.  People on OOP projects
likewise tend to follow an organization similar to Minsky's Sea of Objects.
Each programmer tries to make her own decisions, based on constraints imposed
by neighboring agents towards reaching the overall goal.  Until the goal is
met, everything looks like confusion, then suddenly things snap into focus,
and an answer emerges.  Flow of control is naturally from subordinate to
manager.

Into this new emerging approach comes people trying to sell ways to make
diagrams of software.  This is somewhat analogous to diagrammatic dance
notations.  The diagram is not the dance.  Eventually, you have to shut up 
and dance.

I think we do need better ways to understand the interactions between
classes.  And we need better ways to understand the interactions between
objects.  But when such emerge, they're not going to have much in 
common with structured analysis.

kelpie@tc.fluke.COM (Tony Garland) (06/12/90)

In article <37538@genrad.UUCP>, charlie@genrad.com (Charlie D. Havener) writes:
 
[ material deleted]

> The message I get and believe is that Object Oriented analysis, design and implementation
> via a supportive language is superior in all ways to the old structured analysis approach.

    As I view it, OOD and traditional functional decomposition are two 
    valid ways of approaching a wide range of problems.  A view which
    holds that one or the other is always best assumes that problems
    of all sizes and sorts should be solved with a single
    approach.  Certainly there are some sorts of problems (especially small,
    real-time ones) where an "old fashioned" functional algorithmic
    treatment might yet be the best way to go.

> all the independent authors are saying it is completely orthogonal to Object Oriented techniques

    That's not the message that I get from some of these same authors.

    I think one needs to be careful when discussing CASE.  There are
    really at least two separate aspects to CASE: the methodology and
    the tools themselves.  Each of these can be used to address a
    wide range of problems.  For instance, when writing methods for
    your OOD, are you going to totally chuck everything you learned
    about functional decomposition out the window as being outdated?
    Although the mindsets with which one approaches OOA and
    structured analysis differ significantly, some of the same
    methodologies and tools can be used to advantage with either
    one.

    For instance, state transition diagrams can be used to describe
    the states an object takes on through its lifetime, data flow
    diagrams can be used to describe its methods, and entity
    relationship diagrams certainly help specify an object's
    attributes.
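
    As a minimal sketch of the first point (C++, with a deliberately
    trivial invented class, not drawn from any particular method or
    tool): an object's lifetime states and legal transitions can be
    written down exactly as they would appear in a state transition
    diagram.

        #include <iostream>

        // Invented example: a File object whose lifetime is the three
        // states of a small state transition diagram.
        class File {
        public:
            enum State { CLOSED, OPEN, AT_EOF };

            File() : state(CLOSED) {}

            void open()  { require(state == CLOSED); state = OPEN; }
            void read()  { require(state == OPEN); /* may set AT_EOF */ }
            void close() { require(state != CLOSED); state = CLOSED; }

        private:
            State state;

            // A transition attempted from the wrong state is flagged
            // at once - the diagram and the code check each other.
            void require(bool ok) {
                if (!ok) std::cerr << "illegal state transition\n";
            }
        };

        int main() {
            File f;
            f.open();   // CLOSED -> OPEN: legal
            f.close();  // OPEN -> CLOSED: legal
            f.read();   // read while CLOSED: flagged as illegal
            return 0;
        }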

 
> I've said enough. If anyone can provide a substantive rebuttal I would like to hear it.

    After having done many an "old fashioned" functional decomposition
    spec and designs (many of which actually worked ;-)) and having used
    a CASE tool recently, I know it would have made my job easier and
    less error prone.  

    While I am excited about the benefits of OOA and OOD, it appears
    that for the near future the systems I'll be working on will make
    use of a combination of OO and structured decomposition.  (More OO in
    higher-level layers like the user interface, less in low-level 
    software control of real-time measurement hardware.)  When
    it comes to the application of CASE tools, I see aspects of
    tools available today that can be used to advantage with either
    paradigm.


-----------------------------------------------------------------------------
Tony Garland  -  John Fluke Mfg. Co. Inc., P.O. Box C9090, Everett, WA  98206
kelpie@tc.fluke.COM | {uw-beaver,microsoft,sun}!fluke!kelpie | (206) 356-5268

tok@stiatl.UUCP (Terry Kane) (06/12/90)

bwb@sei.cmu.edu (Bruce Benson) writes:

>In article <5190@stpstn.UUCP> you write:
>>
>>The trouble with Computer Aided Software Engineering is that it presumes
>>the existence of such a thing as Software Engineering.
>>
>>How can robust engineering or even scientific practices ever develop in
>>a field so long as *everything* is reinvented from first principles?

>Does this failure to exercise discipline really imply a lack of engineering
>know-how?  Maybe it is still just too costly for what you get out of it?

>How much has bridge building changed since the first bridge was built? Sure
>the techniques and materials have changed dramatically, but a bridge of
>today would still be recognized by a builder of centuries past.  The same
>for buildings.  

You've brought up a very good point of divergence between traditional forms
of engineering and software engineering.  SE builds tools for the manipulation
of information (most of the time - automation/process control being the
broad exception, but elec engrs. typically write that software, at least here
in Atlanta, gawdawful stuff too, but I digress :->).  Civil Engrs can, and must
for the public good, share data/techniques and can acquire the same at little
cost.  CE's capital expenditures are for materials - they produce tangible
stuff!

SEs produce intangibles that are often expensive to reuse - that is, company
A may choose to reinvent the wheel (RTW) rather than pay royalties to a half 
dozen vendors of "software chips" for thousands of copies of 'em.  On the other
hand, Company B may only have a limited universe of customers, and so choose
to use a vendor's product.  

Certainly we know from comp.risks that SEs _must_, for the public good, 
produce well engineered tools, but imho it ain't gonna be economic for some
time.  I hope not to have offended by popping into this discussion like this
with these half baked ideas, but by golly, I just have to say that software
engineering is like military intelligence etc.

Personal Opinions!
-- 
Terry Kane                                             gatech!stiatl!tok
Sales Technologies, Inc
3399 Peachtree Rd, NE
Atlanta, GA  (404) 841-4000

jcb@frisbee.Sun.COM (Jim Becker) (06/12/90)

cox@stpstn.UUCP (Brad Cox) writes:

   The trouble with Computer Aided Software Engineering is that it
   presumes the existence of such a thing as Software Engineering.


After working with CASE for a number of years, one day it dawned on me
that *all* software is developed using CASE! Ever develop software
without the aid of a computer? :-) (keypunches don't count..)

One reason that hardware is so far ahead of software is that its
development follows a process bound by physical constraints.  The
CAE producers were able to leverage off the process and build more
automated development and test systems.

Because the software development process can remain so nebulous, in
addition to there being multiple paths to a correct solution, the
toughest thing about CASE is defining a system that provides an entire
solution. Thus all the partial solutions fall by the wayside when
their limitations are touched.

It's my belief that the entire development process has to be changed.

What has happened with the Japanese Sigma  project  for  manufacturing
software? It was supposed to be out in April.

-Jim Becker
--    
	 Jim Becker / jcb%frisbee@sun.com  / Sun Microsystems

kjeld@iesd.auc.dk (Kjeld Flarup) (06/12/90)

In article <55137@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:
>The traditional approach is structured design, top-down programming, CASE,
>paper-work, etc.  The traditional approach tends to be management heavy --
>the top-down factoring of the project is mirrored along the top down 
>factoring of the people working on the problem.  The project follows 
>traditional military aka business organization.  Flow of control is from
>manager to subordinate.  Everything done is documented on paper.
>
>I claim OOP is quite different than this.  It is not naturally a top-down
>approach, but rather a topless approach -- somewhat analogous to Minsky's
>Sea of Agents.  Things have a strong tendency [heavens!] to be built from
>the bottom up.  This scares managers to death -- they're losing control --
>and they try to force a global top-down design on top of the object oriented
>approach.  But OOP doesn't naturally have a top.  People on OOP projects
>likewise tend to follow an organization similar to Minsky's Sea of Objects.
>Each programmer tries to make her own decisions, based on constraints imposed
>by neighboring agents towards reaching the overall goal.  Until the goal is
>met, everything looks like confusion, then suddenly things snap into focus,
>and an answer emerges.  Flow of control is naturally from subordinate to
>manager.
...
>I think we do need better ways to understand the interactions between
>classes.  And we need better ways to understand the interactions between
>objects.  But when such emerge, they're not going to have much in 
>common with structured analysis.

What we need is a rational way to document object-oriented designs. If
we have a lot of objects, which can interact in a lot of ways, then how
is it possible for two software engineers to get the same
understanding of the system? The one may be the designer, and the
other may be the maintainer, and they may never meet.

Now if the designer wants to document his system for later
maintenance, he should use a terminology that the maintainer can
understand. The key question here is not how to describe an object. If
OO were just a question of pushing a handle and then something
happened, then I would not see much difference between top down and OO.
But what are the side effects of pushing a handle? Where does the
control go? It is actually possible that all objects may be affected
before control returns to you. 
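
A minimal sketch of that last point (C++, all names invented for
illustration): the caller pushes one handle, yet control visits every
collaborating object before it returns.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Invented classes: a Dial with attached listeners.  One call to
    // turn() ripples through all of them before returning.
    class Listener {
    public:
        virtual ~Listener() {}
        virtual void valueChanged(int v) = 0;
    };

    class Display : public Listener {
    public:
        void valueChanged(int v) { std::cout << "display shows " << v << "\n"; }
    };

    class Logger : public Listener {
    public:
        void valueChanged(int v) { std::cout << "log records " << v << "\n"; }
    };

    class Dial {
    public:
        void attach(Listener* l) { listeners.push_back(l); }
        void turn(int v) {
            // The call site shows one call; the side effects fan out.
            for (std::size_t i = 0; i < listeners.size(); ++i)
                listeners[i]->valueChanged(v);
        }
    private:
        std::vector<Listener*> listeners;
    };

    int main() {
        Display d; Logger g; Dial dial;
        dial.attach(&d); dial.attach(&g);
        dial.turn(42);  // "pushing the handle"
        return 0;
    }

Nothing at the call site tells the maintainer that the Display and the
Logger are both affected; that is exactly what the documentation has to
capture.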

So along with a better understanding of objects it is also necessary
to develop a way to document OO systems. Personally I work with CASE,
but at home, when doing some programming for myself, I use OO. Why the
difference? The programs I write at home are my own, and there is only
myself to maintain them later. Then using OO gives me an advantage.
But when working on larger systems requiring more people, I need to
use a mechanism to secure my documentation and the possibility to
maintain the system later.
-- 
*     I am several thousand pages behind my reading schedule.    *
Kjeld Flarup Christensen                         kjeld@iesd.auc.dk

crm@romeo.cs.duke.edu (Charlie Martin) (06/12/90)

In article <5190@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:
>The trouble with Computer Aided Software Engineering is that it presumes
>the existence of such a thing as Software Engineering.
>

Well SAID, Brad!
Charlie Martin (...!mcnc!duke!crm, crm@summanulla.mc.duke.edu)
O: NBSR/One University Place/Suite 250/Durham, NC  27707/919-490-1966
H: 13 Gorham Place/Durham, NC 27705/919-383-2256

mcgregor@hemlock.Atherton.COM (Scott McGregor) (06/13/90)

>>   The trouble with Computer Aided Software Engineering is that it
>>   presumes the existence of such a thing as Software Engineering.

>After working with CASE for a number of years, one day it dawned on me
>that *all* software is developed using CASE! Ever develop software
>without the aid of a computer? :-) (keypunches don't count..)

I agree with Jim Becker; CASE is a multifaceted term and means much
more than simple SA/SD drawing tools.  Software engineering is the
making of tradeoffs to produce a program that satisfies the minimum
expectations of those people who directly or indirectly will pay to use
the program.  (I use SATISFIES, as opposed to OPTIMIZE, quite
intentionally.  And it is the unwritten expectations, not the written
ones, that are the really critical ones.  I derive these principles from
Herb Simon's micro-economic model of "satisficing behavior".)

In CASE, it is "Software" because that is the structural material from
which solutions are built.  It is "Engineering" because it involves some
of the same sorts of complexity-management tradeoffs as one might see in
Industrial Engineering or other Engineering disciplines.  Of course that
doesn't mean that the techniques and principles of safe design are as
mature as in these other disciplines, just that the required tradeoff
analysis is comparable.  Computer Aided simply means that "computers"
are used to more easily develop software that satisfies the
requirements.  As Jim Becker points out in his reply to Brad Cox above,
we mostly do use computers to develop software.  So I think that CASE is
a good thing, but frequently much maligned because so many tools
marketed as "CASE" under-deliver on their computer-aided promise.  A
similar situation has occurred in "Software Reuse".  I think that in
fact a considerable amount of software development is done WITH large
amounts of "software reuse".  However, the libraries being reused are
the programmer's own past programs.  Many of the "most productive"
programmers are those with the most extensive private libraries,
typically the most senior programmers.  But when those programmers are
moved to a new OS and new programming language, they usually get hit
hard with a learning curve, which is really the re-generation of a
critical mass of self-written utility routines in their new OS and
programming libraries.

So the problem of software reuse may not really be the problem of
getting people to reuse code--they do that already--the problem may be
getting them to reuse other people's code, and in *motivating* them to
search out and understand other people's reusable routines.  I've had
plenty to say on this topic before, especially with respect to a
historical situation I observed concerning a human technical librarian
who maintained a software library, so I won't repeat that again unless
there are further requests.

But I think there is an analogous problem for CASE tools.  I think that
tools are possible that do considerably more to aid software development
than most tools marketed as CASE today.  I would say that some of the
integrated program development environments, such as LightSpeed C, some
APSEs, and even the EMACS Lisp development environment, are substantial
contributions to computer-aided software development over the separate
editors, compilers and linkers that many people still develop with.

However, if we are to really address COMPUTER AIDED software
development, I think we should re-examine what it is about humans that
needs aiding, and what computers do better that might be used to aid the
humans in their weak areas.  In my previous articles and talks on the use
of "Prescient Agents" in CASE, I have mentioned three areas where human
capabilities could benefit from computer aid:

	1) Augment Human Memory.  Human short-term memory is extremely
limited--typically 7 plus or minus 2 "chunks" of associative memory seem
to be all that most of us have.  This makes it hard to manage complex
tasks that require us to constantly remember many items and
relationships.  Human long-term memory is subject to total loss as well
as corruption.  In pre-computer times many techniques were developed to
help remember multiple things.  One of the most successful was reducing
a memory to writing and keeping the writing around.  From this the
publishing industry developed.  In the computer world the analogous
memory-augmentation tool is the database or repository.  SA/SD tools are
an attempt to provide one means of reducing some thoughts about system
structure to more permanent written material.

	2) Improve Human Communication.  Human communication is frequently at
a low baud rate.  High-bandwidth media like video or good written
communication take a long time to create.  The voice medium is cheaper
to create but carries less information in a unit of time, and is harder
to control from the recipient's standpoint.  Because of this low
bandwidth, we often undercommunicate, hoping or assuming that recipients
of our communications will be able to interpret them much more richly by
applying a presumed shared context.  (This problem of communication
presuming a shared context is well known in knowledge engineering and is
the focus of the very intriguing CYC project at MCC.)  If software
developers are using their computers to do much of their work, many
aspects of their context are implicit in the spatial and temporal
relationships between their work artifacts.  Because people often
undercommunicate and over-assume that a full communication has occurred,
we frequently note that things "fall through the cracks".  Fred Brooks's
dictum that adding more programmers to a late project makes it later,
and similar observations about the impossibility of doing good programs
as the number of people involved increases, due to the exploding
inter-person communication problem, are well known and traceable to this
area.  Because the computer can have a more complete capture of
environmental clues and context (assuming its more permanent memory is
used), when it mediates a communication it can also ensure a more
complete transmission of context as well.  This is one of the focuses of
prescient agents.  Computer-mediated structured and semi-structured
discourse (e.g. gIBIS, Information Lens, the Coordinator...) are also
attempts to aid humans in this arena.

	3) Enhance Human Reasoning.  Humans are not good at long repetitive
calculations where precision is required.  Linear programs and similar
techniques are one way of improving human reasoning.  But attention
focus management and other user-interface techniques that help the user
stay focused on their tasks and not on their tools, which manage memory
and mediate communication (a la Doug Engelbart's NLS and Augment), are
also important here.

I believe that CASE tools could go much further in the above directions
than they have today.  Much of the benefit in ME CAD and EE CAD tools is
in their improved capabilities in the above three arenas rather than in
their diagrammatic methodologies, etc., which were frequently well
established far before computers played much of a part.

Scott L. McGregor
mcgregor@atherton.com

warren@eecs.cs.pdx.edu (Warren Harrison) (06/13/90)

In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
>There has been a lot of talk about CASE tools in this group lately.
>The questions seem to be where can I get one and how is Brand X
>slightly better than Brand Y. It seems to me the real question is
>'Are Structured Design CASE tools worth investing time, effort and money in?'
>I have tentatively formed my decision. The answer is NO!
>
> [some remarks about object oriented programming/design being better
>  than structured analysis]

Some of the tools the poster mentioned (e.g., EasyCASE) support other
things besides structured design - for example entity-relationship
modelling. Likewise, such CASE tools can be used to develop a model
of the system (not necessarily the solution) - for example checks
come in here with coupons, checks flow to the bank, coupons flow to
the order desk who then send order req's on to the warehouse, etc.
Such models can help the customer identify missing flows or
misunderstandings between the parties, as well as forming a basis for a
requirements document (for example, a finite-state machine describing the
behavior of the elevator you're supposed to write software to control).
Also many CASE tools provide the ability to do requirements tracing,
which is useful whether you're using OOP/OOD or not.

The "real question", as the poster puts it, should be 'are CASE tools
worth the time and effort if I want to do xxx?'

Warren

==========================================================================
Warren Harrison                                          warren@cs.pdx.edu
Department of Computer Science                                503/725-3108
Portland State University   

sharam@munnari.oz.au (Sharam Hekmatpour) (06/13/90)

In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
>There has been a lot of talk about CASE tools in this group lately.
>The questions seem to be where can I get one and how is Brand X
>slightly better than Brand Y. It seems to me the real question is
>'Are Structured Design CASE tools worth investing time, effort and money in?'
>I have tentatively formed my decision. The answer is NO!

I agree with you...

I have spent close to 2 years now looking at some CASE tools. The ones I
have seen so far (and I presume the rest are not any different) suffer from
this silly formula: SA + SD = SE.

When are CASE tool developers going to wake up to the fact that SD is a
waste of time? Let's see some CASE support for OOD.

Sharam Hekmatpour

zarnuk@caen.engin.umich.edu (06/13/90)

>>(Jim Adcock) rants :-) 
>>  ... The traditional approach tends to be management heavy --

But that's exactly the goal of Software Engineering -- to make the
software development process more MANAGEABLE.  How can management
make intelligent decisions about allocating resources if nobody
can answer the most fundamental questions about what resources
will be necessary and what benefits can be expected for/from a
software development project?  

The traditional approach is management heavy, because that is where
the real problems are in software development.  The problems with
software development are not technical, they are managerial (or 
sadly enough, political).   

However, I'd say that most CASE tools and Software
Engineering methodologies in general place far too much emphasis
on the actual development of software, to the shameful neglect of 
MANAGING the process.  Little or no attention is paid to deciding
which projects to pursue, or calculating the cost/benefits of 
pursuing/abandoning projects.  There is certainly not enough 
attention paid to TRAINING MANAGEMENT to make decisions that 
will promote the strategic goals of an organization.  

There are still problems with developing software, but if management
is willing to COMMIT themselves to pursuing the projects that will
promote their organization -- anything can be built.  Things may 
get rocky in the design and development arena occasionally, but they
get solved with patience and plodding.  When management is perpetually
pursuing the "instant buck" and flipping and flopping between which
projects they have a "hard on" for today -- nothing gets built.  


>>  ... Everything done is documented on paper.

Ever been stuck trying to maintain undocumented systems?  


>>I claim OOP is quite different than this.  It is not naturally a top-down
>>approach, but rather a topless approach ...

I agree that OOP is not amenable to the standard Top-Down Structured Design
approach.  I also believe that OOP is "the next step" to generating higher
quality code more quickly, but saying that OOP is topless is nothing more
than saying that you are an anarchist (no pejorative intended).  When I 
design OOP systems, I start out by identifying objects and their various
domains of responsibility.  The boundaries "float" somewhat once development
gets under way, but the project starts out with a design!  Yes, development
proceeds in a "bottom-up" manner, but there's nothing new in that -- many
developers design "top-down" then implement "bottom-up".  Ultimately, even
OO systems require some action-oriented control sections (notably at the
top!).  

---Paul...

pyoung@axion.bt.co.uk (Pete Young) (06/13/90)

From article <7486@fy.sei.cmu.edu>, by bwb@sei.cmu.edu (Bruce Benson):
> In article <5190@stpstn.UUCP> you write:

> How much has bridge building changed since the first bridge was built? Sure
> the techniques and materials have changed dramatically, but a bridge of
> today would still be recognized by a builder of centuries past.  The same
> for buildings.  

This argument also applies to software at the ultimate (machine)
level.
Programmers of early machines would probably recognise the output from
a modern compiler as a program, although they wouldn't be able to say
too much about how it was designed.
This is not intended as a personal assault on Bruce Benson, but I
can't help wondering if building bridges is an appropriate analogy for
what we are doing.

> In software we seem to always push the limit, it gets larger and more
> complex - always at the limit of what we can do.  Doesn't a 4GL that 
> generates database update and report applications represent the bridge 
> construct?  The better we understand (and do not vary) the application
> the better we get at generating it.

This is quite true, but modern bridge designers are also working at
the limits of what they can do. Cost is also a factor - modern
engineering is about building the cheapest bridge (say) that *just*
doesn't fall down. Occasionally they get it wrong even after all these
years.

> Isn't what we are trying to achieve with Software Engineering simply a 
> restricted subset of the GPS (General Problem Solving) Algorithms of the
> past decades?  Are we not trying to create a general problem solving (and
> construction) algorithm by trying to find an effective software engineering
> method that works for everything?

Yes, I think this is true. I also wonder if it is always sensible. The
required properties of a database are as well known as the properties
for a bridge. The bridge designer is able to follow a set of rules to
prove that his design satisfies the required properties. This is the
point where traditional engineering and software engineering diverges.
Until we have developed methods of calculating programs from
specifications (and this day is not as remote as you might think!) we
are not an engineering discipline in the traditional sense.

What I am trying to say is that where the problem is well-defined, we
should be in a position to derive a set of rules specific to the
problem. That way engineering principles could be introduced in
certain cases without having to wait for general purpose solutions to
be developed.
 
Pete

  ____________________________________________________________________
  Pete Young         pyoung@axion.bt.co.uk        Phone +44 473 645054
  British Telecom Research Labs,SSTF, Martlesham Heath IPSWICH IP5 7RE

bwb@sei.cmu.edu (Bruce Benson) (06/14/90)

In article <1990Jun13.101122.15604@axion.bt.co.uk> pyoung@zaphod.axion.bt.co.uk writes:
>From article <7486@fy.sei.cmu.edu>, by bwb@sei.cmu.edu (Bruce Benson):
>> How much has bridge building changed since the first bridge was built? Sure
>> the techniques and materials have changed dramatically, but a bridge of
>> today would still be recognized by a builder of centuries past.  The same
>> for buildings.  
>This is not intended as a personal assault on Bruce Benson, but I

Not to worry ;-).

>can't help wondering if building bridges is an appropriate analogy for
>what we are doing.

The point I was trying to illustrate is that our intuitive concept of
engineering comes from such mundane, *mastered* efforts as building
a bridge.  We are confident when we plan to build a bridge that the bridge
will work.  We are looking for that same type of confidence when we plan to
build something with software.  *Engineering* is the word and reference
that may be inappropriate since it brings so much baggage in meaning from
the traditional disciplines. 

We want to be as confident as a bridge builder, but we are not building
bridges.  We don't rebuild the same word processor over and over again.
Instead we upgrade or build *new or novel* capabilities with each new word
processor.

>required properties of a database are as well known as the properties
>for a bridge. The bridge designer is able to follow a set of rules to
>prove that his design satisfies the required properties. This is the
>point where traditional engineering and software engineering diverge.

Putting it another way, don't we see domain-specific technology
(database in particular) becoming available that moves us away from having
to *prove that his design satisfies* or maybe reducing drastically what
has to be proven in creating the object (bridge or database)?  Mundane
databases are being created using dBase or Informix instead of C or
Pascal.  The house builder uses 2 by 4s not 3 by 5s to build with.  More
importantly the builder uses *standard* *enabling* processes and
technology to build with.  The standard problems (databases) in a given
context (personal computers) we have *solved* to our current level of
expertise.

>Until we have developed methods of calculating programs from
>specifications (and this day is not as remote as you might think!) we
>are not an engineering discipline in the traditional sense.

Won't moving the *programming* up a level (or levels?) of abstraction only
move the problems up to a higher level?  Agreed, this is where we are
going, and have gone (machine code->assembler->HOL->4GL->CASE), and is
what we are striving to do in the SE field.  But if the conjecture that
we always push our limits is valid, won't the same SE problems
manifest in the activity of specification?  Can a specification language
capture and characterize *human* creative ideas any better than machine
code?  

>What I am trying to say is that where the problem is well-defined, we
>should be in a position to derive a set of rules specific to the
>problem. That way engineering principles could be introduced in
>certain cases without having to wait for general purpose solutions to
>be developed.

I would suspect that if a problem is well defined and has a space of well
defined solutions, we probably have done this.  Can you illustrate?

The two central questions were:
  1.  Are we not mastering software engineering a lot more than we admit?
  2.  Are we not confusing bad-results-from-pushing-the-outer-limit with
      mastery-of-software-engineering-in-things-we-know?

Bruce

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8496    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

itcp@praxis.co.uk (Tom Parke) (06/14/90)

jimad@microsoft.UUCP (Jim ADCOCK) writes:

>Into this new emerging approach comes people trying to sell ways to make
>diagrams of software.  This is somewhat analogous to diagrammatic dance
>notations.  The diagram is not the dance.  Eventually, you have to shut up 
>and dance.

Speaking as a dancer, you might like to know that video has largely
replaced notation as a means of recording choreography; it's faster,
cheaper and captures more. Also note that dance notation is for use
after the dance is created. Now, how do we video software?

	Tom

jkrueger@dgis.dtic.dla.mil (Jon) (06/14/90)

bwb@sei.cmu.edu (Bruce Benson) writes:

>Doesn't a 4GL that generates database update and report applications
>represent the bridge construct?

No.

Or should I ask, bridge between what and what?
And what's a `4GL'?

>Isn't what we are trying to achieve with Software Engineering simply a 
>restricted subset of the GPS (General Problem Solving) Algorithms of the
>past decades?

No.

>Are we not trying to create a general problem solving (and
>construction) algorithm by trying to find an effective software engineering
>method that works for everything?

No.

>Are we not trying to solve problems we have not yet defined?

Yes.  You, for instance, haven't defined the problems to which you
refer in your article.


-- Jon
-- 
Jonathan Krueger    jkrueger@dtic.dla.mil   uunet!dgis!jkrueger
Drop in next time you're in the tri-planet area!

jimad@microsoft.UUCP (Jim ADCOCK) (06/15/90)

In article <1990Jun13.070649.14929@caen.engin.umich.edu> zarnuk@caen.engin.umich.edu writes:
|
|>>(Jim Adcock) rants :-) 

I wasn't ranting, I was being heretical.  I try to contain my madness in a 
quiet sort of way...

|>>  ... The traditional approach tends to be management heavy --
|
|But that's exactly the goal of Software Engineering -- to make the
|software development process more MANAGEABLE.  How can management
|make intelligent decisions about allocating resources if nobody
|can answer the most fundamental questions about what resources
|will be necessary and what benefits can be expected for/from a
|software development project?  

This paragraph makes all kinds of assumptions: 1) software engineering
makes the software development process more manageable 2) being more manageable
is good 3) management can make intelligent decisions if people could answer
some fundamental questions 4) software engineering helps answer those 
fundamental questions.

My last dozen years in the software industry contradict these assumptions.
Fundamentally, what ultimately matters is the final success or failure of 
a project, not how manageable or predictable it is.  Make enough money off
a project and nothing else matters.  Make no money off a project and nothing
else matters.  Most projects [in my experience] tend to fall towards one or
the other of these extremes.  Manageability only matters on projects that fall
in the middle between these two extremes.

|The traditional approach is management heavy, because that is where
|the real problems are in software development.  The problems with
|software development are not technical, they are managerial (or 
|sadly enough, political).   

I agree with your problem statement; however, I believe we are disagreeing
on the ordering of cause and effect here.

|However, I'd say that the scope of most CASE tools and Software
|Engineering methodologies in general place far too much emphasis
|on the actual development of software to the shameful neglect of 
|MANAGING the process.  Little or no attention is paid to deciding
|which projects to pursue, or calculating the cost/benefits of 
|pursuing/abandoning projects.  There is certainly not enough 
|attention paid to TRAINING MANAGEMENT to make decisions that 
|will promote the strategic goals of an organization.  

Agreed, except I believe companies would also do well to push that
strategic decision-making training down to the lowest levels of a company.
To be able to do this, companies have to trust low-level managers, and the
lowest-level employees, to work for the good of the company, and give those
people a fair margin of security.  The problem is that low-level employees
and managers can't say, "Hey, what we're doing here makes no sense, let's
do something else," because they will be dinged and/or fired for it.

|There are still problems with developing software, but if management
|is willing to COMMIT themselves to pursuing the projects that will
|promote their organization -- anything can be built.  Things may 
|get rocky in the design and development arena occasionally, but they
|get solved with patience and plodding.  When management is perpetually
|pursuing the "instant buck" and flipping and flopping between which
|projects they have a "hard on" for today -- nothing gets built.  

Agreed.  But we had better be willing to admit that the amount of time and
money necessary to pursue these longer-term projects is very unpredictable.
Unpredictability need not be bad.  A basketball team can start out losing,
lose key players, have their prime draft choice turn into a flake, and still
go on to win.  It's the final results that count.

|>>  ... Everything done is documented on paper.
|
|Ever been stuck trying to maintain undocumented systems?  

Yes, and it's hellish.  So the trick is to spend money carefully, putting
documentation effort into areas where it counts -- such as documenting
what is necessary for maintainability -- or better yet, writing code
for maintainability in the first place, by striving for strict encapsulation
of design.  Trying to document *everything* is destructive and dissipative --
no project can afford it, and the effort leads from project delays
to eventual project collapse.

|>>I claim OOP is quite different than this.  It is not naturally a top-down
|>>approach, but rather a topless approach ...
|
|I agree that OOP is not amenable to the standard Top-Down Structured Design
|approach.  I also believe that OOP is "the next step" to generating higher
|quality code more quickly, but saying that OOP is topless is nothing more
|than saying that you are an anarchist (no pejorative intended).  When I 
|design OOP systems, I start out by identifying objects and their various
|domains of responsibility.  The boundaries "float" somewhat once development
|gets under way, but the project starts out with a design!  Yes, development
|proceeds in a "bottom-up" manner, but there's nothing new in that -- many
|developers design "top-down" then implement "bottom-up".  Ultimately, even
|OO systems require some action-oriented control sections (notably at the
|top!).  

I disagree, in general.  Toplessness need not be anarchy.  It just requires
good citizenship, and good team players.  When people carefully craft classes
to match a particular design hierarchy, then they end up with classes that
are not reusable -- they automatically get "tuned" for their present place
in the design hierarchy.  Ultimately, then, top-down designed code becomes
throw-away code -- the design is just too immutable.  As we head towards
more and more complicated software, concepts of "top", "bottom", "edge" 
code, etc, become meaningless -- because the vast majority of software
developers are in the middle of the project somewhere.  These developers
have to do their design and coding based on the requirements and constraints
of their neighboring programmers, people who are producers of classes this
class needs, or consumers of this class.  The important thing is to strive
to encapsulate each design, so that each software developer only has a half
dozen interrelated producer/consumers to interact with.  When you don't have
design encapsulation, but rather have dozens of people whose needs/constraints
you must meet, then life becomes hell.
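
To put that in code: a minimal C++ sketch -- every name here is invented
for illustration, none of it from a real project -- of a class whose entire
"neighborhood" is one producer interface:

    #include <stdio.h>
    #include <string.h>

    // The whole producer side, as seen from this developer's desk.
    class PacketSource {
    public:
        virtual ~PacketSource() {}
        virtual size_t read(char* buf, size_t max) = 0;
    };

    // A stub producer, just so the sketch runs.
    class StubSource : public PacketSource {
    public:
        size_t read(char* buf, size_t max) {
            const char* msg = "hello";
            size_t n = strlen(msg);
            if (n > max) n = max;
            memcpy(buf, msg, n);
            return n;
        }
    };

    // Consumers see next(); the producer is reached only through src_.
    class PacketFilter {
    public:
        PacketFilter(PacketSource& src) : src_(src) {}
        size_t next(char* buf, size_t max) { return src_.read(buf, max); }
    private:
        PacketSource& src_;   // the sole coupling to the producer side
    };

    int main() {
        StubSource src;
        PacketFilter filter(src);
        char buf[64];
        size_t n = filter.next(buf, sizeof buf);
        printf("%.*s\n", (int)n, buf);
        return 0;
    }

Whoever maintains PacketFilter negotiates with exactly one producer and
one consumer-facing member function, whatever is churning elsewhere in
the hierarchy.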

dennism@ingres.com (Dennis Moore) (06/15/90)

In article <814@sagpd1.UUCP> jharkins@sagpd1.UUCP (Jim Harkins) writes:
>In article <37538@genrad.UUCP> charlie@genrad.com (Charlie D. Havener) writes:
>>'Are Structured Design CASE tools worth investing time, effort and money in?'
>>I have tentatively formed my decision. The answer is NO!
>
>I second the motion, they have nowhere near the power and flexibility of UNIX
>or even DOS.

Huh???

Do you drive to work or do you bring your lunch?

Look, I learned to program on a TRASH-80 with cassette drive, an HP1000 with
paper tape, and an IBM 360 with punched cards, in 1977.  I love to hack as
much as the next hacker.  Now, I'm forced to reexamine my prejudices against
CASE, as I am the CASE Products development manager here at INGRES Corp.
(CAVEAT -- THIS IS NOT AN OFFICIAL STATEMENT OF INGRES CORPORATION!!!).

I have recently (i.e. last year through now) read extensively on CASE
productivity studies.  Several organizations, such as the Software Engineering
Institute (SEI) at Carnegie Mellon U., have published studies proving that,
for at least some types of projects with some types of employees in some types
of organizations, SADT tools increase productivity.

Show me a single study that contradicts this -- and I mean one that shows
that under no circumstances does SADT help over and above simply getting
down and coding.  OOA or OOD may in fact help more; that I am not arguing.

Finally, before writing off the government's documentation requirements as
busywork invented by frightened bureaucrats, think a little further.  For one
thing, to create the required paperwork, you have to do the work on which the
paperwork is based.  You must create test plans and designs.

Why does the government require this?  Well, I'm sure that your average
MIS program, as again documented by many studies, has 10 to 30 bugs per KLOC
(by some reckonings -- I don't have an attribution handy).  The average
control program for a fighter jet has substantially fewer -- and I bet the
pilot is pretty happy about that.

-- Dennis Moore, my own opinions, blahblahblah

wags@ann-arbor.cimage.com (Bill Wagner/1000000) (06/15/90)

In article <7527@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:
>In article <1990Jun13.101122.15604@axion.bt.co.uk> pyoung@zaphod.axion.bt.co.uk writes:

	[various discussions on bridge analogy deleted]

>We want to be as confident as a bridge builder, but we are not building
>bridges.  We don't rebuild the same word processor over and over again.
>Instead we upgrade or build *new or novel* capabilities with each new word
>processor.

This contains a point I think is very important when discussing our toolset.
Rather than building a bridge, we continually build *and then upgrade, 
enhance, and re-design* our word processors, etc.

My biggest concern with CASE, or any other set of tools, is its ability to 
help me (and my co-workers) mold a previous product into a new one.  As has
been stated by others, software is never really thrown out, but continually
modified to suit a new purpose.

Bill Wagner
"The opinions stated are mine and mine alone.  My employers opinion is that
I should quit reading news and get back to work"

jacob@latcs1.oz.au (Jacob L. Cybulski) (06/15/90)

I think most of the participants in this discussion forget one thing:
Programming Environments, CASE Tools, Methodology Drivers and CASE
Environments should not be confused with each other (I think the distinction
was made by McClure).  Programming environments usually integrate a number
of programming-language-oriented utilities, e.g. syntax-oriented editors,
debuggers, steppers and tracers, etc., into a uniform framework (e.g.
Interlisp or THINK C).  CASE Tools usually target individual programming
tasks, thus improving overall programming productivity (e.g. ER-DESIGNER).
Methodology Drivers integrate the SE Tools to follow a particular
methodology (e.g. IEF, HOS, Excelerator, VAW, etc.).  And then you have CASE
Environments, which can be customised to follow any (or a few) methodologies
(e.g. VSF, POSE, etc.).

The systems that people so eagerly criticise happen to be SA/SD Methodology
Drivers and incorporate a number of CASE Tools supporting the techniques
accepted as part of that methodology.  I cannot see any reason to knock
down a particular software development methodology just because a new
programming flavor appears on the market (e.g. OOP).  Hundreds of companies
adopted SA/SD and invested hundreds of thousands to introduce standards and
train their staff; if an appropriate CASE tool, environment or driver
reduces their costs, we should praise the tool designers and the management
decision to adopt it.  If you are an OOP fan, which means it is unlikely
you'd use COBOL as your programming vehicle, then perhaps you should use the
tools which support your activities (e.g. StP for Ada and C++).

However, a true Software Engineer must look at software development as a
process involving not only programmers (and their likes and dislikes for a
particular method or language), but also the clients, organisational needs,
costs and benefits, project structure and programming team, requirements
specification, design, implementation, testing and finally maintenance.  Your
favorite CASE environment (e.g. OOP) may not provide extensive support for
ALL elements of the software development lifecycle.  Would you then consider
changing your mind and adopting the tool which reduces your costs overall, or
would you develop your own and commit your employer to millions (CASE
development costs go into hundreds of man-years)?  I would.

Jacob L. Cybulski

Amdahl Australian Intelligent Tools Program
Department of Computer Science
La Trobe University
Bundoora, Vic 3083

Phone: +613 479 1270
Fax:   +613 470 4915
Telex: AA 33143
EMail: jacob@latcs1.oz

kelpie@tc.fluke.COM (Tony Garland) (06/15/90)

In article <881@dgis.dtic.dla.mil>, jkrueger@dgis.dtic.dla.mil (Jon) writes:
> bwb@sei.cmu.edu (Bruce Benson) writes:
> 
> >Doesn't a 4GL that generates database update and report applications
> >represent the bridge construct?
> 
> No.
> 
> Or should I ask, bridge between what and what?
> And what's a `4GL'?
> 

    Scientific thought at work.  Answer the question first, then try
    to understand it. ;-(

davidm@uunet.UU.NET (David S. Masterson) (06/16/90)

In article <55235@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:

   I disagree, in general.  Toplessness need not be anarchy.  It just requires
   good citizenship, and good team players.  When people carefully craft
   classes to match a particular design hierarchy, then they end up with
   classes that are not reusable -- they automatically get "tuned" for their
   present place in the design hierarchy.  Ultimately, then, top-down designed
   code becomes throw-away code -- the design is just too immutable.  As we
   head towards more and more complicated software, concepts of "top",
   "bottom", "edge" code, etc, become meaningless -- because the vast majority
   of software developers are in the middle of the project somewhere.  These
   developers have to do their design and coding based on the requirements and
   constraints of their neighboring programmers, people who are producers of
   classes this class needs, or consumers of this class.  The important thing
   is to strive to encapsulate each design, so that each software developer
   only has a half dozen interrelated producer/consumers to interact with.
   When you don't have design encapsulation, but rather have dozens of people
   whose needs/constraints you must meet, then life becomes hell.

Now I'm interested...

Where do you start the process of encapsulation?  Is this not a "top-down"
approach -- namely breaking things up into "encapsulatable" pieces?  Agreed,
this process doesn't travel all the way down to the lowest level pieces, but I
contend that it does travel from the top until it reaches a point where the
pieces are "manageable".
--
===================================================================
David Masterson					Consilium, Inc.
uunet!cimshop!davidm				Mt. View, CA  94043
===================================================================
"If someone thinks they know what I said, then I didn't say it!"

jimad@microsoft.UUCP (Jim ADCOCK) (06/16/90)

In article <5136@newton.praxis.co.uk> itcp@praxis.co.uk (Tom Parke) writes:

>Speaking as a dancer, you might like to know that video has largely
>replaced notation as a means of recording choreography; it's faster,
>cheaper, and captures more.  Also note that dance notation is for use
>after the dance is created.  Now, how do we video software?

Yes!  Video captures a two-dimensional view of what is actually happening
as a function of time, so that people can go back and analyze it at their
leisure.

As opposed to diagrams, which capture a two-dimensional view at one point
in time and are out of date at any other point in time.

Clearly, in real-world software projects that are getting anywhere, lots of
things are happening constantly, so any "video" recording scheme is
going to have to compress the changes into some kind of shorthand
notation.

Examples of this are change-tracking schemes that summarize which classes
have been changing, and why, as opposed to which classes are remaining
stable.

One could imagine story boards that track successive changes in a class
hierarchy, or, over a more limited scope, changes in the interactions
between objects.  On a CRT one could "flip" between successive story boards
to get a simple animated display of how things have been changing....
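
A crude sketch of such a story-board differ, in C++ -- the two "snapshots"
are hard-coded purely for illustration; a real tool would pull them from
version control:

    #include <stdio.h>
    #include <set>
    #include <string>

    typedef std::set<std::string> Snapshot;

    // Print every class name present in 'b' but absent from 'a'.
    static void report(const char* label, const Snapshot& a, const Snapshot& b) {
        for (Snapshot::const_iterator it = b.begin(); it != b.end(); ++it)
            if (a.find(*it) == a.end())
                printf("%s %s\n", label, it->c_str());
    }

    int main() {
        Snapshot monday, friday;
        monday.insert("Window");    friday.insert("Window");
        monday.insert("ScrollBar"); friday.insert("ScrollBar");
        monday.insert("TextPane");            // dropped during the week
        friday.insert("Gauge");               // added during the week
        report("added:  ", monday, friday);
        report("removed:", friday, monday);
        return 0;
    }

Flip through a week of such summaries and you have the animated display.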

mcgregor@hemlock.Atherton.COM (Scott McGregor) (06/16/90)

In article <55235@microsoft.UUCP>, jimad@microsoft.UUCP (Jim ADCOCK) writes:

> My last dozen years in the software industry contradict these assumptions.
> Fundamentally, what ultimately matters is the final success or failure of 
> a project, not how manageable or predictable it is.  Make enough money off
> a project and nothing else matters.  Make no money off a project and nothing
> else matters.  Most projects [in my experience] tend to fall towards one or
> the other of these extremes.  Manageability only matters on projects that
> fall in the middle between these two extremes.

I agree with you about what *ultimately matters*.  However, many people feel
the need to regularly re-examine their expected returns during the life of
a project which is far from completion. Predictability and manageability
matter in this interim time period. 

Sometimes what looked easy to do when you started is discovered to
be more difficult as you get under way.  As a result, your expenses are
higher than you expected and your profit lower.  A small change
probably won't alter your investment decision, but if expenses grow
much further your profits will become losses.  At that point it is good
to "stop throwing good money after bad" and cancel the project.  Of
course, there is always some uncertainty about what the actual costs are
(a.k.a. risk).  Predictability means that the margin of error and
variability of these actual costs are small.  By ensuring high
predictability you can avoid a risky project that
in and of itself threatens the ongoing survival of a) the organization, 
or b) your place in the organization.  So, for many organizations (and
managers in them) interested in stability, things that affect predictability
are important.  In fact, some companies and individuals are so risk-averse
that they will knowingly choose lower-payoff investments just to ensure
that they don't lose too badly either.  This newsgroup is frequently filled
with postings from individuals at various large companies that had an
early lead on a technology but knowingly disinvested and left the
market open to more entrepreneurial companies and individuals.  Note that
the stock market also pays a premium (in P/E ratios) for predictability
in earnings, so it is not as if individuals at these large companies seeking
predictability are making these low-risk decisions foolishly -- they may
just be appealing to their existing class of shareholders.

Manageability is also an interim (and typically personal) concern.  
Companies often make compensation plans on a regular basis, and this
cycle may not mesh perfectly with a large project.  Since many people
don't want to have to wait 3-5 years (in the case of large projects)
in order to be considered for a raise, they need to be judged on 
interim results.  Managers can be evaluated on how predictable their
schedules have been (lower risk to the organization, greater assurance
that the investment will pay off as originally anticipated).  For the 
manager to be evaluated, the senior manager will need some data--they
will need to see that the manager can tell the current state of things
and tell if they are close to plan or not.  For this reason manageability
is a valuable interim goal as well.

The fact that one can build a product on schedule, within budget, as
spec'd and still have severe things happen in the end when it turns out
that customers won't buy that product is not lost on me.  This has 
happened to projects that I have managed with the resulting death of
the organization.  On the other hand, this death took three years, and
in the interim I was reviewed quarterly on predictability and 
managability of my projects.  Other managers who controlled their 
schedules less closely suffered the same *ultimate* fate as I, but
they also suffered more in quarterly reviews as well.  In the end,
I have the track record to show that I can well estimate and control
project costs and they cannot, and that helped me in finding a
comfortable  place in a new organization.

> |However, I'd say that the scope of most CASE tools and Software
> |Engineering methodologies in general place far too much emphasis
> |on the actual development of software to the shameful neglect of 
> |MANAGING the process.  Little or no attention is paid to deciding
> |which projects to pursue, or calculating the cost/benefits of 
> |pursuing/abandoning projects.  There is certainly not enough 
> |attention paid to TRAINING MANAGEMENT to make decisions that 
> |will promote the strategic goals of an organization.  

But actually, watching the predictability and manageability of the
project IS a form of deciding which projects to pursue and the cost
benefits of pursuing/abandoning projects.  Think of it this way.
You've got an engine with a tachometer on it.  8000 RPM is the expected
danger point.  The forecast for the trip is 4000-6000 RPM.  You can
watch the absolute RPM, or you can take the expected average (5000 RPM)
and watch the amount of variance from that average.  As the difference
increases toward 3000 you know you have a problem.  Similarly with
project planning.  You make your original investment decision based
upon plans and forecasts.  You have your expected schedule and costs,
and your expected revenues.  You have a point at which, if the estimates
are off by that amount, you risk a loss.  If you don't have much variability
in these projections as you continue the project, there is not much
need to re-evaluate your investment decision.  If projections change a lot,
then you spend lots of management time re-evaluating the
investment plans.  Senior managers like "predictability" because it
means they don't need to keep re-examining investment decisions they have
already made.  R&D managers who can keep schedules and budgets under control,
and Marketing managers who give good revenue forecasts, are thus valued.
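
To reduce the tachometer idea to arithmetic, a minimal C++ sketch -- all
of the numbers are invented for illustration:

    #include <stdio.h>

    int main() {
        double planned   = 100.0;  // budgeted cost, units arbitrary
        double projected = 128.0;  // latest forecast
        double breakeven = 125.0;  // cost at which profit becomes loss
        double variance  = (projected - planned) / planned;
        printf("cost variance: %+.0f%%\n", variance * 100.0);
        if (projected > breakeven)
            printf("re-examine the investment decision\n");
        return 0;
    }

As long as the forecast stays inside the break-even margin, senior
management never needs to reopen the decision; the moment it drifts
outside, they do.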

I do not disagree that many times people at the bottom of organizations
can do a good (often even better) job of such forecasting.

> Agreed.  But we had better be willing to admit that the amount of time and
> money necessary to pursue these longer-term projects is very unpredictable.
> Unpredictability need not be bad.  A basketball team can start out losing,
> lose key players, have their prime draft choice turn into a flake, and still
> go on to win.  It's the final results that count.

Most definitely true.  But unpredictability isn't necessarily a virtue
either.  When the team is losing, the owner may change the strategy and
fire the coach.  If the new coach turns the team around and it goes on
to win, that's great.  But the old coach may not be happy about losing
his job.  So to the extent that he can control things, he may adopt
low-risk strategies aimed at protecting his job.  He may pass on drafting
a guy who occasionally sets records but also gets into bad slumps.  He
may go instead for someone who is a consistent middle-range scorer.
This may not be the optimal winning strategy, but if they lose by
only a few points instead of losing big, they may save the coach's job for
a little while longer.  I think this sort of conservatism and
self-interest explains a lot of the differences between certain companies
and others.  Many entrepreneurial companies are populated by young managers
who haven't achieved all they want yet.  They aren't afraid to risk what
they have, because they feel they can start over again.  Many more
conservative companies are dominated by managers who have achieved
most of their goals and who want to preserve their situations.  You have
the right to choose the kind of company you want to work for based on
your own needs, just as the managers have a right to direct the work in a
way that will protect their own needs.
  
Scott McGregor
Manager, Tool Integrations
Atherton Technology
mcgregor@atherton.com

(Formerly an R&D software project manager at Hewlett-Packard for many years)

cox@stpstn.UUCP (Brad Cox) (06/16/90)

>In article <1990Jun13.101122.15604@axion.bt.co.uk> pyoung@zaphod.axion.bt.co.uk writes:
>>From article <7486@fy.sei.cmu.edu>, by bwb@sei.cmu.edu (Bruce Benson):
>>> How much has bridge building changed since the first bridge was built? Sure
>>> the techniques and materials have changed dramatically, but a bridge of
>>> today would still be recognized by a builder of centuries past.  The same
>>> for buildings.  
>>This is not intended as a personal assault on Bruce Benson, but I
>
>Not to worry ;-).
>
>>can't help wondering if building bridges is an appropriate analogy for
>>what we are doing.

It's a beautiful example, if we make one simple change.  Instead of building
them from well-understood, reproducible materials like wood and iron,
imagine that every bridge-builder felt free to *invent* his own materials
from first principles for each bridge.  Lemme see now, the Brooklyn Bridge
is kinda cute, but iron is kinda heavy, so I'll build the next one from
breadsticks instead.  And the next one from spaghetti.  And the next one from...

The paradigm shift that I've been calling the software industrial revolution
involves overcoming our reluctance to build up a robust marketplace in 
well-understood materials with reproducible properties (Stacks and Queues
and ScrollBars and CustomerObjects), and beginning to build from these
instead of reinventing everything from first principles.
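
A minimal C++ sketch of the difference -- the stock container classes here
merely stand in for such marketplace components:

    #include <stdio.h>
    #include <stack>
    #include <vector>

    int main() {
        // Composed from stock parts -- a stack adaptor over a vector --
        // rather than invented from first principles for this one program.
        std::stack<int, std::vector<int> > undo;
        undo.push(1);
        undo.push(2);
        while (!undo.empty()) {
            printf("%d\n", undo.top());
            undo.pop();
        }
        return 0;
    }

The builder's job becomes selection and composition, not invention.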

This involves a shift in focus quite analogous to the Copernican shift that
put the sun in the center of the universe rather than the earth.  It involves
relinquishing our belief that programmers and their *tools* (languages) are
where we should be focusing, and instead focusing on what it will take to
create and maintain a robust software components marketplace.
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

bwb@sei.cmu.edu (Bruce Benson) (06/17/90)

In article <5212@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

>The paradigm shift that I've been calling the software industrial revolution
>involves changing our reluctance to build up a robust marketplace in 
>well-understood materials with reproducible properties (Stacks and Queues
>and ScrollBars and CustomerObjects) and begin building from these instead
>of reinventing everything from first principles.

AGREED!  As a programmer/software engineer/computer scientist I should study
all these well-understood concepts, but I SHOULD NEVER EVER HAVE TO CODE
another stack, queue, linked list, search, sort, tree traversal, etc.
Instead I should only have to parameterize (instantiate) a generic form to
get what I need.  Parameterization would include picking space/time trade-offs.
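
For instance, a minimal C++ sketch of what I mean -- one generic binary
search, written once and instantiated per element type (illustrative code,
not from any particular library):

    #include <stdio.h>

    // Written once; never coded again by hand.
    template <class T>
    int find_index(const T* a, int n, T key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key)  lo = mid + 1;
            else               hi = mid - 1;
        }
        return -1;   // not found
    }

    int main() {
        int    primes[] = { 2, 3, 5, 7, 11 };
        double levels[] = { 0.5, 1.5, 2.5 };
        printf("%d\n", find_index(primes, 5, 7));    // instantiated for int
        printf("%d\n", find_index(levels, 3, 2.5));  // and again for double
        return 0;
    }

Space/time trade-offs would be more parameters of the same kind.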

>This involves a shift in focus quite analogous to Copernican shift that put
>the sun in the center of the universe rather than the earth. It involves
>relinquishing our belief that programmers and their *tools* (languages) are
>where we should be focusing, and instead focusing on what it will take to
>create and maintain a robust software components marketplace.

Libraries have always been popular but difficult to use, due to the nature
of prepared subroutines.  Ada generics, as well as classes in object-oriented
languages, would seem to be an order-of-magnitude jump forward in supporting
the creation of these types of software components.  I would like to think
this will happen naturally as the productivity and quality benefits of these
components sell themselves. 

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8496    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

cox@stpstn.UUCP (Brad Cox) (06/28/90)

In article <CIMSHOP!DAVIDM.90Jun25103113@uunet.UU.NET> cimshop!davidm@uunet.UU.NET (David S. Masterson) writes:
<In article <5257@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:
<
<   I use a slide in which Mrs. Kahootie's plumber proposes to build a
<   plumbing system as we'd build software, by designing and building 
<   everything from first principles rather than by buying standard
<   components from the plumbing supply store.
<
<   Suppose she lacked the common sense to refuse. Imagine the difficulties
<   that *she* would face with precisely the questions that you've asked.
<
<I'm not sure I catch your meaning in this analogy.  My questions for you are:
<
<- Where'd the "standard components" come from in the first place?
<- What *standards* were they produced to?  With what proof?
<- Will the "standard components" work together?  Again, with what proof?
<
<I'd feel kind of sorry for Mrs. Kahootie if she expected things to be built
<right and trusted the plumber to do *the right thing*, but only made sure that
<"the price was right".  If she didn't bother to check on where the plumber got
<his training or whether or not he was using shoddy material, then she got what
<she paid for!
<
<Software development "components" are nowhere near the level of stability
<and surety that plumbing tools are!  If I was expecting to make a living off
<of the system built with components, I'd want to know their expected
<capabilities and have some proof to back it up with.  I certainly hope that
<any software vendors looking to make a living off of "components" are thinking
<the same way!
<
<You'll note that there is nothing in the above that suggests that every system
<must be built from "first principles".  Even the reference to my article was
<meant to suggest that you can build up from smaller pieces, but you need to
<establish a "confidence level" in those smaller pieces.  The question is how?
<
<
<--
<===================================================================
<David Masterson					Consilium, Inc.
<uunet!cimshop!davidm				Mt. View, CA  94043
<===================================================================
<"If someone thinks they know what I said, then I didn't say it!"

<- Where'd the "standard components" come from in the first place?

From the next-lower echelon of the software components marketplace; i.e.,
vendors of fine-granularity software components such as Stepstone
(Objective-C) and Digitalk and ParcPlace Systems (Smalltalk).

In the case of Stepstone, we build on the work of even-lower vendors, such
as DEC, Sun, etc., for the operating system and kernel window systems
(X Windows, Presentation Manager, etc.).

<- What *standards* were they produced to?  With what proof?
<- Will the "standard components" work together?  Again, with what proof?

See the concluding section of my IEEE Software paper, "Planning the Software
Industrial Revolution", particularly the discussion of specification/testing
languages and their use within Stepstone.
-- 

Brad Cox; cox@stepstone.com; CI$ 71230,647; 203 426 1875
The Stepstone Corporation; 75 Glen Road; Sandy Hook CT 06482

scotth@boulder.Colorado.EDU (Scott Henninger) (07/06/90)

In Article 3980 pyoung@axion.bt.co.uk (Pete Young) writes:

> In that sense all materials are well-understood. Properties of new
> materials can easily be calculated and verified by testing. Moreover,
> an engineer is able to rely on the fact that any given piece of
> material will have mechanical properties between certain limits.
> This arises from having a quality control system to produce consistent
> materials. 

This is precisely where the bridge building analogy breaks down.  The
materials used are *physical* entities that obey the *static* laws of
physics.  Software more often than not deals with *conceptual* entities
that evolve and change *dynamically*.  The physical constraints on
software are few and far between, and where they do exist (say in the
form of programming languages), they're subject to change (new
programming languages, new language features).

A more useful analogy is American law.  In this domain the constraints
are conceptual and subject to change, with few exceptions.  Murder will
always be illegal, but each case is different and must be scrutinized
carefully to determine how the situation should be handled.

After 200 years of practice, we still have not been able to engineer a
perfect (or close approximation thereof) legal system.  The question is
whether we can expect better from software "engineering".


-- Scott
   scotth@boulder.colorado.edu