[comp.software-eng] Soft-Eng Digest V4 #8

MDAY@XX.LCS.MIT.EDU (Moderator, Mark S. Day) (02/22/88)

Soft-Eng Digest             Sun, 21 Feb 88       Volume 4 : Issue  8

Today's Topics:
             "Win a Supercomputer": Typo in Phone Number 
             Coordinating Software Development  (3 msgs)
                     Correctness Proofs (2 msgs)
                Cost Estimation Models for Maintenance
              Details of Bank's Costly Computer Foul-Up
----------------------------------------------------------------------

Date: Thu, 11 Feb 88 12:44:57 PST
From: Eugene Miya <eugene@ames-nas.arpa>
Subject: "Win a Supercomputer": Typo in Phone Number 

>From: cgs@umd5.umd.edu  (Chris Sylvain)
>Subject: Contest for an ETA10 Model P
>
>[ This is from the Jan. 7 issue of _Electronic Design_, in the section
>  entitled "Design Alert" ]
>
>	The Promise of Supercomputers Lures Future Designers
> [[ Article edited ]]
>For more information, call Marcia Shilling at ETA at (612) 853-6538.       

This is a typo: try:
Marcia Shilling at ETA at (612) 853-6438.

------------------------------

Date: Tue, 9 Feb 88 11:49:51 CST
From: "Scott E. Preece" <preece%fang@gswd-vms.gould.com>
Subject: Coordinating Software Development 

   hollombe@husc6.har:
> Better yet, call it "unnecessary".  I've run into this situation in
> several shops where people simply didn't know better (not that there was
> any excuse for that).  The solution is a configuration management system
> wherein only one person can work on a module at a time.  In order to
> work on the module they have to check it out of the system in such a way
> that no one else can check it out until it's been checked in again.

This assumes you can lock the component for the whole time people are
working on it.  We often have cases where a component is in development
for weeks or months; we cannot lock maintenance out of the component for
the whole time development is going on.  It is also common for separate
component development efforts to intersect at specific modules or
include files; it is unrealistic to talk about locking those for long
periods of time.

There is no way to avoid reconciliation in some cases; what we would
like are better tools for handling the reconciliation when it is
necessary.  In general those are the same tools that would make many
other aspects of development easier and more productive.
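The exclusive-checkout scheme quoted above can be sketched as a simple lock-file protocol. This is a minimal illustration under invented names, not any particular configuration management product; a real system would also version the module contents:

```python
import json, os, tempfile

# Hypothetical lock directory; a real system would use a fixed shared path.
LOCKDIR = tempfile.mkdtemp()

def checkout(module, user):
    """Grant `user` exclusive access to `module`, or fail if it is held."""
    path = os.path.join(LOCKDIR, module + ".lock")
    try:
        # O_CREAT|O_EXCL makes lock acquisition atomic: exactly one
        # caller can create the lock file first.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        with open(path) as f:
            holder = json.load(f)["user"]
        raise RuntimeError(f"{module} is checked out by {holder}")
    with os.fdopen(fd, "w") as f:
        json.dump({"user": user}, f)

def checkin(module, user):
    """Release the lock so someone else may check the module out."""
    path = os.path.join(LOCKDIR, module + ".lock")
    with open(path) as f:
        if json.load(f)["user"] != user:
            raise RuntimeError("only the lock holder may check in")
    os.remove(path)
```

Preece's objection applies directly to this sketch: the lock is held for the whole edit, so a weeks-long development effort shuts everyone else out.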


scott preece
gould/csd - urbana
uucp:	ihnp4!uiucdcs!ccvaxa!preece
arpa:	preece@Gould.com

------------------------------

Date: Wed, 10 Feb 88 13:33:45 -0800
From: Ira Baxter <baxter@madeleine.UCI.EDU>
Subject: Coordinating Software Development

A recent issue of "soft-eng" raised the question of how one can
integrate different development branches of a software product.  I
recently ran into a paper on the subject, and heard the principal
author give a short talk on it.  Since I have yet to actually read
it, I cannot pass any judgement on its qualities, but the authors have
written other papers I think are competent.

``Integrating Non-Interfering
Versions of Programs'', Susan Horwitz, Jan Prins, and Tom Reps,
University of Wisconsin Tech Report #690, March, 1987.

IDB
(714) 856-6693  ICS Dept/ UC Irvine, Irvine CA 92717

------------------------------

Date: Thu, 11 Feb 88 08:52:39 CST
From: Ralph Johnson <johnson@p.cs.uiuc.edu>
Subject: Coordinating Software Development

Anyone interested in the problem of merging versions of programs
should see the paper in the latest Principles of Programming Languages
conference by Horwitz, Prins, and Reps, called "Integrating
Non-Interfering Versions of Programs".  They had another paper at
POPL on the theory behind their technique.  They use dependence
graphs to represent programs.  Their algorithm takes a base version
of a program and two later versions of it, and will either show
how merging the two versions will cause them to interfere with each
other, i.e. will introduce bugs, or will merge them into a new
version, which is guaranteed not to interfere.  The algorithm is
conservative, so it may sometimes complain about versions which will
not actually interfere, but it is sophisticated enough that it seems
to me that this will rarely happen.
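The flavor of the interference test can be shown with a much cruder, purely set-based analogue. The real algorithm compares slices of program dependence graphs, which also catches semantic interference between textually disjoint edits; this sketch only compares component definitions by name, and the data are invented:

```python
def merge3(base, a, b):
    """Toy three-way merge over maps of component name -> definition.
    A "change" is any component whose definition differs from the base.
    If both variants changed the same component differently, report
    interference; otherwise combine each side's changes."""
    changed_a = {k for k in base if a.get(k) != base[k]}
    changed_b = {k for k in base if b.get(k) != base[k]}
    conflicts = {k for k in changed_a & changed_b if a.get(k) != b.get(k)}
    if conflicts:
        return None, conflicts      # interference detected
    merged = dict(base)
    for k in changed_a:
        merged[k] = a[k]
    for k in changed_b:
        merged[k] = b[k]
    return merged, set()
```

Like the real algorithm, this is conservative in spirit: it refuses to merge rather than silently pick a loser, though it misses the semantic interference that dependence graphs are designed to detect.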

------------------------------

Date: Tue, 09 Feb 88 17:26:21 PST
From: PAAAAAR%CALSTATE.BITNET@MITVMA.MIT.EDU
Subject: Correctness Proofs

mtune!lzaz!lznv!psc@rutgers.edu  (Paul S. R. Chisholm) writes
> UH2@PSUVM.BITNET (Lee Sailer) writes:
>> PROGRAM CORRECTNESS PROOFS:  All the work on this seems to require the
>> addition of "assertions" to the code, that is, logical statements of
>> all the assumptions that must hold before, after, and even during the
>> execution of a piece of program.  (See Gries, the Science of Programming)
>...
>It's quite impossible to prove most software "correct", either because
>it's not correct (no smiley), or[...it takes too long]

I concur with this judgement - most software has bugs and this is no
smiling matter.

The situation is worse because it has been shown that the Hoare Logic
technique is incomplete (the paper was in the Journal of the ACM in about 1978
but the citation escapes me).

>...
>In my personal experience, it's impractically difficult to "prove" any
>non-trivial program correct.  On the other hand, the skills you need
>for correctness proofs are very useful.
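The assertion style Sailer describes (and Gries's book teaches) can be illustrated with pre-, post-, and loop-invariant assertions checked at run time. This is a toy example of the annotation style, not a proof; the function is invented:

```python
def int_sqrt(n):
    """Largest r with r*r <= n, annotated in the Hoare/Gries style."""
    assert n >= 0                            # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        # loop invariant: r*r <= n, holds on every iteration
        assert r * r <= n
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition
    return r
```

A proof would establish the invariant once and for all by induction; the run-time checks merely spot-check it on the inputs actually tried.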

In the late '60s I proved a 'nontrivial' graphics system correct.  The
hardware was three months late in delivery, so I had ample time to
prove the machine code(!) I had written.  I found three bugs.
Even so it failed when we tested it
 - because of a hardware bug, and the engineer has never talked to me since.

Brad Sherman points out that proving the prover will take a long time.
Perhaps rather than program proving we should be developing Program Refutation
Programs.  This should be easy using Prolog or a natural deduction approach.
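A "program refutation program" in this sense need not be exotic: even a naive random search for a counterexample to an asserted property is a refuter. A minimal sketch, with an invented property and generator:

```python
import random

def refute(prop, gen, tries=10000):
    """Search for an input falsifying `prop`; return a counterexample,
    or None if the property survived every random trial."""
    for _ in range(tries):
        x = gen()
        if not prop(x):
            return x
    return None

# Invented claim: "abs(n) is always positive" -- refuted at n = 0.
cex = refute(lambda n: abs(n) > 0, lambda: random.randint(-5, 5))
```

Unlike a prover, a refuter that finds nothing proves nothing; but when it does find something, the counterexample is a concrete, checkable bug report.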

Perhaps we need a method that develops a correct structure from a
specification that satisfies my requirements.  Twenty years later there is
still no universally accepted method which derives a structure from a
specification.

Finally, proofs of programs are limited to verifying that a given piece of
code behaves according to a specification.  There is *no* guarantee that the
specification is for useful software.
The Proceedings of the International Workshop on Software Specification
and Design (IEEE Comp Soc Press) show many approaches tried in the
1980s to getting specifications that are 'good'.

Dick Botting
PAAAAAR@CCS.CSUSCC.CALSTATE(doc-dick)
paaaaar@calstate.bitnet
PAAAAAR%CALSTATE.BITNET@CUNYVM.CUNY.EDU
Dept Comp Sci., CSUSB, 5500 State Univ Pkway, San Bernardino CA 92407
voice:714-887-7368           modem:714-887-7365 -- Silicon Mountain

------------------------------

Date: Thu, 11 Feb 88 12:44:57 PST
From: Eugene Miya <eugene@ames-nas.arpa>
Subject: Correctness Proofs

>Data flow and functional approaches to parallelism produce programs
>that are much easier to discuss in a theoretic framework.  I suggest
>you look at
>
>  J. Backus, Can Programming be Liberated from the von Neumann Style?
>  A Functional Style and Its Algebra of Programs, Communications of the
>  ACM, 21(8):613-641 (August 1978).

I just spoke to Backus on the phone (a really neat speaker, I recommend the
IBM video tapes of him [Contact Bruce Shriver at IBM TJ Watson]).
This paper is getting dated in some ways.  FP is giving way to FL.
A new report from IBM Almaden will be available in April.

Second the good recommendation:
>Susan Owicki, David Gries, _Verifying Properties of Parallel Programs: An
>Axiomatic Approach_, Communications of the ACM, Volume 19, Number 5, May 1976.

------------------------------

Date: Wed, 10 Feb 88 19:03:18 est
From: ddrg@skl-crc.arpa (Duncan Glendinning)
Subject: Cost Estimation Models for Maintenance

A company associate of mine asked that I post this message to the net.
Any responses will be directed back to him.

 ----------

	Are there any people actually using commercially available
cost estimation tools to estimate the cost of operations and support,
or maintenance, of large software projects - especially in the
applications area of embedded or real-time systems?  My specific area
of interest is for Naval shipboard embedded systems, but I am
interested in anyone using tools for maintenance cost estimation for
the entire life cycle.

	It seems that most cost estimation models are designed to
estimate the costs of software development only.  A few models do
provide cost estimates for maintenance, but only as an afterthought.
COCOMO and SLIM (I believe) do not adequately estimate maintenance
costs over the entire life cycle: they were designed mainly for
development analysis, they deal mainly with the factors which affect
developer productivity, and they inadequately address the factors
affecting maintenance productivity.  I believe the factors affecting
the productivity of maintenance programmers and analysts are different
from those affecting their productivity in development.  Such factors
include: the understandability, complexity, reliability, and structure
of the code; the availability and quality of the delivered
requirements, design, and test documentation; the standards and
procedures used to design, develop and test the code; the use of
quality assurance and configuration management; and the availability
and quality of delivered support tools and documentation.
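For comparison, the only maintenance estimate basic COCOMO offers is a thin extension of its development equation, which illustrates the point: the constants below are Boehm's published organic-mode values, but the system size and annual change traffic figures are invented, and none of the maintainability factors listed above enter the formula at all.

```python
def cocomo_basic_effort(kdsi, a=2.4, b=1.05):
    """Basic COCOMO development effort in man-months for `kdsi`
    thousand delivered source instructions (organic-mode constants)."""
    return a * kdsi ** b

def cocomo_annual_maintenance(kdsi, act):
    """Basic COCOMO annual maintenance effort: the development effort
    scaled by ACT, the fraction of code changed per year."""
    return act * cocomo_basic_effort(kdsi)

# Invented example: a 32-KDSI system with 15% annual change traffic.
dev_mm = cocomo_basic_effort(32)
maint_mm = cocomo_annual_maintenance(32, 0.15)
```

Everything that distinguishes a well-documented, well-structured system from an undocumented tangle is invisible to this model, which is exactly the complaint being made.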

	Maintenance is DIFFERENT from development! 

	Maintenance has been ignored too long!  Development is the
process of producing a product which meets its spec, done in a
cost-effective and time-effective way.  Maintenance is dealing with a
delivered product to maintain its usefulness.  The processes involved
and the factors involved are different.  Thus, a different cost
estimation model approach and algorithms are needed.

	Are there any good cost estimation models out there which are
designed with maintenance in mind?  PRICE SL looks good, but I am not
aware of any other models adequate for maintenance.  Has anyone used
PRICE SL?

	Are SOFTCOST or JENSON useful for maintenance cost
estimations?
 -------------------------------------------------------------------------------
Duncan Glendinning                         arpa: ddrg@skl-crc.arpa
Software Kinetics Limited                 voice: 613-831-0888
65 Iber Road, P.O. Box 680                  fax: 613-831-1836
Stittsville, Ontario  K0A 3G0

------------------------------

Date: 10 Feb 88 08:56:25 PST (Wednesday)
From: Rodney Hoffman <Hoffman.es@Xerox.COM>
Subject: Details of Bank's Costly Computer Foul-Up

For those of you who read RISKS DIGEST (FORUM ON RISKS TO THE PUBLIC IN COMPUTER
SYSTEMS; write RISKS-Request@CSL.SRI.COM for more information), this is a
follow-up to stories in RISKS-5.16 (25 July 1987) and again in RISKS-6.16 (27
January 1988), in which I related news accounts of Bank of America's failed
attempt at an ambitious new trust accounting and reporting system.  Here are
more details of interest to the software engineering community.

The Los Angeles Times for Sunday, February 7, 1988, carried a lengthy front-page
review of the entire debacle, "B OF A'S PLANS FOR COMPUTER DON'T ADD UP" by
Douglas Frantz.  The article includes lots of background history and economics.
Here are a few edited excerpts giving more details than the previous accounts:

  Last month, Bank of America acknowledged that it was abandoning the $20 
  million computer system after wasting another $60 million trying to make 
  it work.  The bank will no longer handle processing for its trust division, 
  and the biggest accounts were given to a Boston bank.  Top executives 
  have lost their jobs already and an undisclosed number of layoffs are in
  the works.
  
  ...The total abandonment of a computer system after five years of develop-
  ment and nearly a year of false starts raises questions about the bank's
  ability to overcome its technological inadequacy in an era when money is
  often nothing more than a blip on a computer screen....   
  
  In 1981, the bank had fallen far behind in the computer race.  Then-new
  chairman Armacost launched a $4-billion spending program to push B of A 
  back to the technological forefront.  The phrase he liked was "leap-
  frogging into the 1990s," and one area that he chose to emphasize was 
  the trust department.... 
  
  The bank was mired in a 1960s-vintage accounting and reporting system.  
  An effort to update the system ended in a $6-million failure in 1981 
  after the company's computer engineers worked for more than a year with-
  out developing a usable system.....  
  
  In the fall of 1982, bank officers met Steven M. Katz, a pioneer in creat-
  ing software for bank trust departments.... In 1980, he had left SEI Corp. 
  in a dispute and founded rival Premier Systems.
    
  Katz insisted on using Prime instead of B of A's IBM computers.  He boasted 
  that he could put together a system by 1983.  Within six months, a B of A - 
  led consortium of banks agreed to advance Premier money to develop a new, 
  cutting-edge system for trust reporting and accounting.  Nearly a year was 
  spent on additional research....  The go-ahead to fund the project came in  
  March, 1984.  While it was not a deadline, the goal was to have the new  
  system in operation by Dec. 31, 1984.
  
  What followed was a textbook structure for designing a computer system.  A
  committee was formed of representatives from each B of A department that
  would use the system and they met monthly to discuss their requirements.  
  DP staff gathered for a week each month to review progress and discuss 
  their needs with the Premier designers.  Some of the DP experts found Katz
  difficult to deal with occasionally, especially when they offered views on
  technical aspects of the project.  "Don't give us the solutions.  Just tell
  us the problems," Katz often said.
  
  When the ambitious Dec. 31, 1984, goal was passed without a system, no one
  was concerned.  There was progress, and those involved were excited about 
  the unfolding system and undaunted by the size of the task.  B of A devoted
  20 man-years to testing the software system and its 3.5 million lines of
  code; 13,000 hours of training, including rigorous testing, were provided
  to the staff that would run the system....
  
  In spring 1986, the system was about ready.  Some smaller parts were already
  working smoothly.  Test runs had not been perfect, but the technicians
  thought most bugs could be worked out soon.  A demonstration run had been
  successful....
  
  Many employees were operating both systems, working double shifts and
  weekends.  Late in 1986, an anonymous letter warned against a "rush to
  convert" to the new system and told the manager, not a computer expert, 
  that people had "pulled the wool" over his eyes.  The executive assured 
  the staff that there would be no conversion before it was time.  By then,
  lines of authority had also changed, making cooperation difficult.
  
  By early 1987, tests had been running with only a few bugs.  "There were
  still bugs, but the users felt they could run with it and work out the
  bugs as we went along," one former executive said.  A conversion date was
  set:  March 2, 1987.  
  
  Just then, half the DP staff was pulled off the assignment.  The conversion
  lasted one week.  On March 7, the first of the 24 disk drive units on the 
  Prime computers blew up, causing the loss of a portion of the database.  It
  was past midnight each night before workers retrieving data from a backup
  unit left the offices.  Over the next month, at least 14 more of the disk 
  drives blew up.  None had malfunctioned in the previous months of testing.  
  
  It turned out that the units were part of a faulty batch manufactured by
  Control Data Corp.  But by the time the cause was discovered, delays had
  mounted and other difficulties had arisen.  Taken individually, none would
  have caused the ensuing disaster.  Together, they doomed the system.  
  
  At the same time, the bank decided to move the main staff 30 miles away.  
  Key people quit and morale sank.  Another section of staff was told they
  would be moving from Los Angeles to San Francisco, with many losing their
  jobs.  [Conflicts, turf battles, consulting firms, temporary employees]
  
  The bank's first public acknowledgement of the problems came in July 1987.
  [See RISKS-5.16]  An in-house investigation was viewed by many staff mem-
  bers as a witch hunt.  The bank announced further costs and then the trans-
  fer of the accounts in January 1988.  [See RISKS-6.16]
  
  The bank's one-time head of the program, since resigned, says, "A lot of
  people lay down on the floor and spilled blood over this system, and why
  they abandoned it now I cannot understand.  A guy called me this morning 
  out of the blue and said that 95% of it was working very well."

------------------------------

End of Soft-Eng Digest
******************************

-------