[comp.software-eng] Retrospective Forecasting

duncan@dduck.ctt.bellcore.com (Scott Duncan) (02/28/90)

In article <5473@bgsu-stu.UUCP> klopfens@bgsu-stu.UUCP (Bruce Klopfenstein) writes:
>I am very interested in the discussion going on about forecasting.
	[In comp.society.futures]
>I have studied technological forecasting, and what surprises me is
>the seeming lack of research into past forecasts.

>                                                                What
>better way to learn about forecasting than to re-analyze past forecasts 
>and look, for example, for systematic errors in the process.

>                                                         Little is
>learned from past mistakes.  I'm curious what you forecasting theorists
>and practitioners think about that.


>Dr. Bruce C. Klopfenstein      |  klopfens@andy.bgsu.edu
>Radio-TV-Film Department       |  klopfenstein@bgsuopie.bitnet
>Bowling Green $tate University |  klopfens@bgsuvax.UUCP
>Bowling Green, OH  43403       |  (419) 372-2138; 352-4818
>                               |  fax (419) 372-2300

It struck me that this applies equally well to software projects.  In
discussing software quality and productivity efforts with a number of people
over the past couple of years, this has been one of the major failings noted.
There is very little effort put into retrospective analysis of what happened
in a project.

My sense is that some large projects (especially those for the military) may
have this done to them, and some very small projects, where the individual
determination of a few people prevails, do it on their own.  However, the
100K - 5M range (lines of code) does not seem to attract much of this.

I have heard many reasons given for this, but the ones repeated most often are:

	o by the time a project is done, folks are already committed to
	 the next project (or, at least, the next release of the one they
	 just completed), i.e., there is no time allotted in the original
	 (or following) schedule for such activity;

	o people keep very poor records of what went on in the project,
	 e.g., history, documentation of decision-making, etc., so there
	 is little data to operate on other than people's memory of what
	 happened, i.e., people won't trust or find worth in the effort
	 with such "unreliable data";

	o the impression is that this effort is to find out what went wrong
	 (rather than what worked well and then deciding how to repeat those
	 successes) and folks involved don't want to spend time on an activity
	 that can only make them look bad;

	o it takes a lot of time and human effort, i.e., it costs a lot,
	 and nobody's risked doing it enough (and written about it) to show
	 real benefit in subsequent projects from having done retrospectives;

	o the original teams usually move on to other projects so, even if
	 you uncovered things of interest, the next project will have a
	 different staff and is likely, because of individual differences,
	 to display a different profile of behavior.

While there is truth in all of them, I think the negative aspects of that
"truth" can be overcome.  In particular, if such retrospective activity and
project history is a pre-planned part of the project, I think much of the
problem goes away.  The remaining problem is justifying the cost of such
effort in the short term, since it is generally in the long term that you
would expect to see the cost recouped.

On the other hand, my personal belief is that the very next project would
probably benefit from such an effort.  I look at it as a parallel to code
inspections (also labor intensive but effective): there's an up-front cost,
but people begin to see the back-end benefit rather quickly once they
actually get into doing it.

I'd be interested in people's comments about these points.

Speaking only for myself, of course, I am...
Scott P. Duncan (duncan@ctt.bellcore.com OR ...!bellcore!ctt!duncan)
                (Bellcore, 444 Hoes Lane  RRC 1H-210, Piscataway, NJ  08854)
                (201-699-3910 (w)   609-737-2945 (h))

smith@iuvax.cs.indiana.edu (John W. Smith) (02/28/90)

	I think there is tremendous value in learning from the past.
I have for many years been trying to understand why we in this business
consistently underestimate project times by hundreds of percent.  The
conclusion I come to is that we are for the most part completely blind
to the past.  There are two specific mistakes that I've seen
repeatedly.  
	We assume that THIS project will work as it is supposed to.  
There are a few projects where this actually happens, just enough to
sustain the belief in the possibility.  But 19 out of 20 get screwed
up in some way or another.  We may recognize that the last project
was delayed because the vendor didn't deliver the connectors on time.
So for the current project we order well in advance.  But then the
connectors get lost in the mail, and nobody follows up, so THIS
project is delayed.  If you consciously and deliberately review your
projects over a number of years, it is very obvious that the normal
mode is for things NOT to work as intended.
	The second mistake we make is to remember the time it took to
FIX the problem, but not the time it took to FIND it.  It takes half
an hour to change the number and recompile the program.  That's what
your staff will remember.  They DON'T remember the two weeks it took
to figure out that it was that particular number which was causing the
problem.
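	One way to keep that visible (a minimal sketch only; the field names
and figures below are invented for illustration, not anyone's real tracking
system) is to log find-time and fix-time separately in whatever defect
record the project keeps, e.g. in Python:

    # Illustrative sketch: record hours spent *finding* a defect separately
    # from hours spent *fixing* it, then summarize both at the retrospective.
    # Field names and sample data are invented.
    defects = [
        {"id": "D-101", "find_hours": 80.0, "fix_hours": 0.5},
        {"id": "D-102", "find_hours": 12.0, "fix_hours": 3.0},
        {"id": "D-103", "find_hours": 2.0,  "fix_hours": 1.0},
    ]

    total_find = sum(d["find_hours"] for d in defects)
    total_fix = sum(d["fix_hours"] for d in defects)

    print(f"Hours spent finding defects: {total_find:.1f}")
    print(f"Hours spent fixing defects:  {total_fix:.1f}")
    print(f"Find/fix ratio:              {total_find / total_fix:.1f}x")

Even a crude split like this keeps the two weeks of searching from
disappearing behind the half hour of recompiling.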
	It could be that the kind of people who are attracted to a
fast-moving and ever-changing environment such as ours are evolutionarily 
selected to NOT have a historical perspective.  But if we're trying to
manage, hence control, our environment, it is a very valuable tool
that we could make much better use of.  Although the examples I've
given pertain specifically to management rather than forecasting, the
basic ideas are no less true.

mcilree@mrsvr.UUCP (Robert McIlree) (03/01/90)

From article <37392@iuvax.cs.indiana.edu>, by smith@iuvax.cs.indiana.edu (John W. Smith):
> 	The second mistake we make is to remember the time it took to
> FIX the problem, but not the time it took to FIND it.  It takes half
> an hour to change the number and recompile the program.  That's what
> your staff will remember.  They DON'T remember the two weeks it took
> to figure out that it was that particular number which was causing the
> problem.

If you have an established design methodology, and adhere to it closely,
you could reduce the time spent in finding the problem by reviewing
where the defect occurred in the process. I use the term defect here
as opposed to bug because a substantial number of problems with
systems occur in the requirements, architecture, and functional
design phases. Bugs in code are merely the manifestation of those
defects. Serious defects propagate throughout the entire design
cycle, such that it's no longer an issue of fixing bugs in code.
Rather, the issue centers on whether the system should have been
designed and built in the first place, whether the project needs a
complete reorientation, etc.

My main point in this regard is that, assuming proper systems engineering
and structured design techniques, concentrating effort on fixing bugs in
code is misplaced. The effort should focus on the design or requirements
defects that spawned the problem in the first place.
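
A retrospective can make that concrete by tagging each defect with the
phase that introduced it as well as the phase that detected it, and then
counting the escapes. The sketch below is illustrative only; the phase
names and data are invented, not figures from any real project:

    # Illustrative sketch: tally defects by the phase that introduced them
    # vs. the phase that detected them, to show how many requirements and
    # design defects only surface later as "bugs" in code or test.
    from collections import Counter

    PHASES = ["requirements", "design", "code", "test", "field"]

    # (introduced_in, detected_in) pairs: invented sample data.
    defects = [
        ("requirements", "test"),
        ("requirements", "field"),
        ("design", "code"),
        ("design", "test"),
        ("code", "code"),
        ("code", "test"),
    ]

    introduced = Counter(d[0] for d in defects)
    escaped = Counter(
        d[0] for d in defects if PHASES.index(d[1]) > PHASES.index(d[0])
    )

    for phase in PHASES:
        if introduced[phase]:
            print(f"{phase:>12}: {introduced[phase]} introduced, "
                  f"{escaped[phase]} escaped to a later phase")

If most of the escapes originate upstream, then effort spent fixing bugs
in code is, as argued above, attacking the manifestation rather than the
defect.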

> 	It could be that the kind of people who are attracted to a
> fast-moving and ever-changing environment such as ours are evolutionarily 
> selected to NOT have a historical perspective.  But if we're trying to
> manage, hence control, our environment, it is a very valuable tool
> that we could make much better use of.  Although the examples I've
> given pertain specifically to management rather than forecasting, the
> basic ideas are no less true.

Management and forecasting aren't mutually exclusive. In order to
properly forecast, you need historical data. To get that data, it
must be collected, crunched, and represented in some form that
non-technical types can understand and make use of. To ensure that
you get the data and act upon the outcomes, you must manage that
process.
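
As a deliberately simple example of "crunching" that history, suppose you
kept estimated and actual durations for a handful of past projects; even a
single average slip factor derived from them is more honest than a bare
gut-feel number. The project names and figures below are invented:

    # Illustrative sketch: derive an average slip factor from past projects
    # and apply it to a new raw estimate.  All data invented.
    history = [
        {"project": "A", "estimated_months": 6,  "actual_months": 11},
        {"project": "B", "estimated_months": 12, "actual_months": 20},
        {"project": "C", "estimated_months": 4,  "actual_months": 9},
    ]

    slip = sum(p["actual_months"] / p["estimated_months"] for p in history)
    slip /= len(history)

    raw_estimate = 8  # months: the new project's optimistic figure
    print(f"Average historical slip factor: {slip:.2f}")
    print(f"Adjusted estimate: {raw_estimate * slip:.1f} months")

A real shop would want more than three data points and some notion of
variance, but even this much is in a form the non-technical types can read
and act on.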

I agree wholeheartedly that a historical perspective helps greatly
in controlling quality and fine-tuning development processes.
It's a sizeable effort to set this up, and the pay-offs are not
short-term, but in places where I've seen it practiced, it has been
well worth the resources and effort long-term.


Bob McIlree

djones@megatest.UUCP (Dave Jones) (03/01/90)

From article <37392@iuvax.cs.indiana.edu>, by smith@iuvax.cs.indiana.edu (John W. Smith):
...
> I have for many years been trying to understand why we in this business
> consistently underestimate project times by hundreds of percent.  The
> conclusion I come to is that we are for the most part completely blind
> to the past. ...

I have come to the conclusion that we usually underestimate for the same
reason that the young suitor tells his girlfriend that she can't get pregnant
the first time they do it... and often with the same result.

matt@bacchus.esa.oz (Matthew Atterbury) (03/02/90)

In article <2262@mrsvr.UUCP> mcilree@mrsvr.UUCP (Robert McIlree) writes:
>From article <37392@iuvax.cs.indiana.edu>, by smith@iuvax.cs.indiana.edu (John W. Smith):
>> 	The second mistake we make is to remember the time it took to
>> FIX the problem, but not the time it took to FIND it.  [...]
>>                           :
>
>If you have an established design methodology, and adhere to it closely,
>you could reduce the time spent in finding the problem by reviewing
>where the defect occurred in the process.   [...]
>                            :
>Bob McIlree

Yeah Team!
            IMHO the most useful/important reason/effect of a good design
[using an established (and understood) design *method*] is that the coder,
when confronted with a defect, can often say:

    Hmmm, this is being done wrongly ...
          this can ONLY be done in this module ...
          this can ONLY be done by these [few] routines ...
          this particular defect sounds like this routine ...
          bingo! that must be it!

    All this WITHOUT looking at any code.

Obviously, this requires a very good knowledge of the design and a
good understanding of the code [less important since often the defect
is a design error] - a luxury not everyone can have.
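
To put the same idea a little more concretely: if the design records which
module owns which responsibility, even a trivial lookup narrows the search
before anyone opens an editor. A toy sketch; the module names,
responsibilities, and symptom wording below are all invented:

    # Illustrative sketch: map design-level responsibilities to modules so a
    # defect report can be narrowed to a few candidates without reading code.
    responsibilities = {
        "parse input records":   ["reader"],
        "apply billing rules":   ["billing", "rates"],
        "format printed output": ["report"],
    }

    def candidate_modules(symptom):
        """Return modules whose stated responsibility overlaps the symptom."""
        hits = []
        for duty, modules in responsibilities.items():
            if any(word in symptom for word in duty.split()):
                hits.extend(modules)
        return sorted(set(hits))

    print(candidate_modules("wrong billing total on printed report"))
    # -> ['billing', 'rates', 'report']

In real life the "lookup" is in the coder's head, of course, but it only
works if the design actually assigns responsibilities that cleanly.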
-- 
-------------------------------------------------------------------------------
Matt Atterbury [matt@bacchus.esa.oz.au]   Expert Solutions Australia, Melbourne
UUCP: ...!uunet!munnari!matt@bacchus.esa.oz.au            "klaatu barada nikto"
ARPA: matt%bacchus.esa.oz.AU@uunet.UU.NET  "life? don't talk to me about life!"

reggie@dinsdale.nm.paradyne.com (George W. Leach) (03/02/90)

In article <20406@bellcore.bellcore.com> duncan@ctt.bellcore.com (Scott Duncan) writes:

>It struck me that this applies equally well to software projects.  In
>discussing software quality and productivity efforts with a number of people
>over the past couple of years, this has been one of the major failings noted.
>There is very little effort put into retrospective analysis of what happened
>in a project.

[discussion of reasons for not doing this and the fact that the initial
investment can be recouped down the road - deleted]


    For over ten years now I have been asking myself why we don't invest
more time up front in software products.  Why must a new software system
be given a ridiculous due date that strains the programming staff to just
get it built without worrying about all the details that would make life
easier in the future?

    There are a couple of non-technical factors involved here that hinge
on short-term versus long-term thinking as well as the ever-present
"market window".

    First, and most importantly, a product must appear on the market by
a certain time in order for the company to compete in the marketplace.
In the past, when I worked for a regulated entity, e.g., Bellcore or
pre-divestiture AT&T, this was not as great a concern as it is with a
market-driven company.  However, getting a product out the door would
still impact the BOCs' ability to do business.

    Certain companies seem to take the attitude that it is better to get
the product out there sooner than to wait and produce a higher-quality
product.  Being first to market or hot on the heels of your competition
seems to be a motivating factor.  Other companies, which have built up
reputations as quality producers, can afford to release products after
their competition because their customers expect a better product and
are willing to wait.  Witness the attitudes towards the announcement of
DEC's RISC workstation last year.  Not to take away from DEC, but I
heard many folks simply say that they would wait to see what Sun would
produce.

    On the other hand, it seems that whenever we spec out a software
system it must always be the most feature-laden, whiz-bang thing the
world has ever seen.  We set too high a goal to complete in a relatively
short period of time.  I saw a statement that I think was attributed to
Harlan Mills: "We don't build software, we grow it".  This makes a great
deal of sense to me.  Use a phased release system to grow a system in a
controlled manner.

    Anyway, to get back to the topic at hand... deadlines are out of our
control and, due to market pressures, often focus only on the end product,
not on the internal quality of the product in terms of maintainability,
extensibility, readability, and so on.  When shipping is the ultimate goal,
nothing else will matter to management.  Consequently the programmers are
somewhat constrained in what they can do in order to meet that goal *and*
at the same time slip in as many quality features as they can along the
way.

    Until product management makes all these long-term investments a part of
the product and allocates time and resources for them, they will not happen.


    While there are people who, left to their own devices, would never write
readable code, I think that most professionals would.  Most do, despite the
fact that it is not part of their assignment.  Remember, when schedules
slip the first things to go are code reviews and the like; then features
are thrown out.  This sort of says it all, doesn't it?

George

George W. Leach					AT&T Paradyne 
(uunet|att)!pdn!reggie				Mail stop LG-133
Phone: 1-813-530-2376				P.O. Box 2826
FAX: 1-813-530-8224				Largo, FL 34649-2826 USA