[comp.sw.components] Ada 9X objectives and long development cycles

rcd@ico.ISC.COM (Dick Dunn) (10/12/89)

jcardow@blackbird.afit.af.mil (James E. Cardow) writes:

James Cardow wrote:
> >> ...Consider the ten to twenty year development cycle for large projects...

I squawked:
> >If you have a ten-year development cycle for a software project, you're
> >going to be producing obsolete software!  You can't help it...

I realize that my comment was just an objection--not a suggestion for how
to fix the problem I perceived.  However, Cardow seems interested in the
subject; I'm vitally interested; I'd like to bounce some ideas around...
so perhaps it's worth pursuing.  (Would this be better moved to
comp.software-eng?  I don't know how much Ada relevance it will retain.)

> I must admit that my comments were made with only my own experience in mind,
> that being large DOD sponsored projects that had developments spanning two
> to three computer generations.  However, that is the primary Ada environment.

Perhaps, then, there's a fundamental question of whether Ada can remain
suitably stable for these very-long-schedule projects, yet acquire some
ability to adapt more quickly.  At first blush, that seems like it might
be difficult.  However, it's important to consider it, because if Ada can't
adapt faster and be more responsive than it has been, there is a chance that
large DoD projects will be its *only* real domain, denying it success in
the commercial world where you've got to move faster.

(I spend my time in the commercial world and have done so for quite a
while; my only real encounter with the DoD world was a brief but horrible
skirmish with a Minuteman-related project many years ago.)

> Consider the problem in a real context.  System software in the +100,000 
> lines of code, with supporting software at a 4:1 ratio...

Yes, ouch, although commercial operating systems are in that range.  Once
you've got such a system, you've got all the problems that go along with
it.  But can you aim to avoid producing systems of that size in future
projects?  How big do the systems *really* need to be?

One thing I've noted again and again as I look at complex software and
large software projects is that predictions that "this is gonna be a big
un!" are self-fulfilling.  Let me see if I can illustrate.

There's a tendency for the perceived amount of effort required for a
project to be "bimodal" in a funny way.  That is, if you plot the number
of people on a project against the staff's own perception of whether it's
under-staffed or over-staffed, you are very likely to see something like
the following:
	- very few people: perception is "we need more"
	- just about right: perceptions are mixed as short-term needs vary
	- too many (past first node): "too many people; they're getting in
	  the way and getting bored"
	- a whole bunch too many past first node: "not enough people"!
	  This is the interesting point--it's where you get so many people
	  that you need to ADD people to manage, coordinate, communicate,
	  etc.  You're so overstaffed that you've got to add people to
	  cope with the excess staff so that work can get done.  Excess
	  staff is a particularly severe problem in early project phases.
	- just about right again (at second node):  "ok, but this is sure
	  a big project"  You've got enough people to handle the project
	  AND all the extra people.

Projects could be multi-modal (more than two nodes) but it's hard to
imagine covering that much range in staffing without getting a reality
check.  Two examples of where I believe I've seen this bimodal-staffing
phenomenon were something like 4 or 5 people versus about 30, and perhaps
10-15 versus 600-800!  The differences are radical--they have to be to get
a qualitative difference between the nodes.
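
For a rough sense of why those jumps are qualitative and not merely
quantitative, assume (as the usual Brooks-style back-of-the-envelope
argument goes) that every pair of people on a project is a potential
communication channel, so n people give n*(n-1)/2 channels.  A quick
sketch, in C rather than Ada just to keep it short, for team sizes
roughly like the ones above:

    /* Pairwise communication channels, n*(n-1)/2, for team sizes
     * roughly like the ones mentioned above, assuming every pair of
     * people is a potential channel.
     */
    #include <stdio.h>

    static long channels(long n)
    {
        return n * (n - 1) / 2;
    }

    int main(void)
    {
        long sizes[] = { 5, 30, 15, 700 };
        int i;

        for (i = 0; i < 4; i++)
            printf("%4ld people -> %6ld potential channels\n",
                   sizes[i], channels(sizes[i]));
        return 0;
    }

Five people have 10 pairs to keep straight; 700 have close to a quarter
of a million.  That's part of why so much of the added staff ends up
coordinating instead of building the product.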

The first point about this is that if you get really wound up for a big
project, instead of making every serious attempt to simplify it to the
bone, you'll staff up for one.  You'll pass the first node at a full
gallop and enter the region where you're (seemingly) understaffed.

Now think about what happens if you get to the larger size:  You *must*
produce a large piece of software.  There *will* be more difficulties in
communication among people (and therefore among modules).  So you have to
impose a lot more structure.  It's harder to assess the effects of changes,
so it's harder to make them.  If one function (or ADT implementation or
whatever) isn't quite what you'd like, it may involve too many people and
too much hassle to change it, so the code that uses it gets a little more
complicated and bigger to work around the mismatch.  If it's obviously
wrong, it'll get changed...I'm talking about subtler problems.  But the
phenomenon feeds on itself:  As one piece of code grows to adapt to a
mismatch, it itself becomes harder to change.  The software "sets up" too
soon.

You try to avoid this, of course, because you can see some of it coming.
So you spend more work on the front-end phases--detailed design, massive
specification, all conceivable attempts to pull things together.  It helps
a little, but it also speeds the ossification process.  What you're doing
is making sure that when things set up, they end up about where they're
supposed to be.  But what you really need is to keep them from setting up
so soon.
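
To make the shape of that drift concrete, here's a contrived sketch (C
rather than Ada, made-up names, not from any real project) of a shared
routine that isn't quite right and the local workaround that grows up
around it instead of a fix:

    /* Contrived sketch: the shared lookup isn't quite right (it returns
     * -1 on a miss instead of reporting an error), but changing it would
     * involve too many people, so a caller grows a workaround instead.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry { const char *name; int id; };

    static struct entry table[] = { { "alpha", 1 }, { "beta", 2 } };

    /* "frozen" shared code: returns -1 for "not found" */
    static int lookup_id(const char *name)
    {
        int i;

        for (i = 0; i < (int)(sizeof table / sizeof table[0]); i++)
            if (strcmp(table[i].name, name) == 0)
                return table[i].id;
        return -1;
    }

    /* one team's local workaround, duplicated with variations elsewhere */
    static int get_id_or_die(const char *name)
    {
        int id = lookup_id(name);

        if (id == -1) {
            fprintf(stderr, "no such name: %s\n", name);
            exit(1);
        }
        return id;
    }

    int main(void)
    {
        printf("beta -> %d\n", get_id_or_die("beta"));
        return 0;
    }

Multiply that by a few hundred call sites, each with a slightly different
wrapper, and the wrapper layer itself becomes something nobody dares
change.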

Some of you will no doubt think I'm crazy, or hopelessly pessimistic (or
both! :-)  It's probably hard to grasp until you've worked on a project
whose printed specifications comprise a stack of paper taller than you
are.

If you can keep the software fluid and able to change throughout the
development process, you get the side benefit that the software has
already been through changes.  Adaptability is already part of the
software's character, and it's in people's minds.  Some of the
un-adaptability will get fixed before the software goes out the door the
first time.  (You're sanding off the rough edges that occur in *any* new
product.)

Another problem with the overstaffing/overestimating described above is
that it becomes harder for any one person to get a comprehensive view of
the system, or even a substantial one.  That feeds both the difficulty of
making changes and the difficulty of perceiving the need for change.

Following from that to a different observation: the best software is
constructed by the smallest possible number of the best possible people.
Each person has greater responsibility and a more global view of the
project.  Instead of having, perhaps, NObody who can see the whole
picture, you might have two or three who can see it, and who can act as
cross-checks on one another.

There are still projects which are large enough that they need a small army
to implement them...and honestly, I don't have a lot of ideas there.  But I
do know that the number of such massive projects is much smaller than is
commonly believed.  I also know that once a project gets away from you and
starts to grow, you have real trouble taming it again.
-- 
Dick Dunn     rcd@ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...No DOS.  UNIX.