[comp.society.futures] Large Scale Computer System Design

joeisuzu@pawl.rpi.edu (Anthony Sute) (11/13/90)

I'd like to bring up the topic of large-scale software design: what the
general consensus is regarding its ethical implications, and how the
techniques currently used might be changed to improve reliability for
life- and mission-critical systems.  I feel that changes in large-scale
system design are warranted and that standards should be created in order
to provide more reliable systems.  Extensive beta testing must be performed,
and standards should be developed for this phase as well.  Any comments on
this topic are welcome...

pcg@spl27.spl.fac.com (Paul C George) (11/17/90)

Let me first note that I am discussing the government contracting
environment, which I suggest produces most large-scale systems. The
commercial world is under a different set of constraints, but with similar
results.

I would suggest that the first requirement is for proposals to allow
sufficient time and resources to actually design the system. Requirements
are often vague in order to maximize the room for 'interpretation'. In
practice, I think systems are 'designed' during the coding activity; in
essence, what you get is whatever we have time to build. Current pricing
puts a premium on delivering only the minimum that a specification requires;
if the system won't in fact work, that is to the contractor's advantage. He
has no liability and will probably be paid for 'enhancements'.

There is a similar problem with testing. Insufficient time is allocated to
it, and most of the effort serves to uncover what has in fact been built by
a large number of engineers in their cubicles with a worm's-eye view,
analogous to debugging via compiler runs. Testing is often designed to
demonstrate that the system works to spec, not to find problems. I have
often seen qualification tests where, if the sequence of operator actions
was changed, the software would fail. Again, it is part of the game to
declare the project a success.

The fundamental problem is that in order to win contracts the proposal team
must lie about how long the project will take. The design reviews are then
required to take place long before the problem can even be sufficiently
understood. The net result is that the designs are often an exercise in
handwaving, smoke, and mirrors. This is aggravated by reviewers without the
technical competence to truly evaluate the design. We first need some
honesty in bidding and awards to allow systems to be designed and built in
a professional manner. Currently our industry is in the business of winning
and completing contracts, not developing software systems. We must ensure
that schedules are in fact reasonable for accomplishing the job.

The final problem (perhaps more to the point of your posting) is that large
systems are basically beyond human comprehension. By the time a project is
large enough to require three levels of organization (team, group, project),
probably no one has the bandwidth to understand the design at a sufficient
level of detail to recognize inconsistencies or problems. A partial solution
is CASE tools, which (potentially) can check a design for at least interface
consistency. Certain object-oriented analysis/design techniques can also
help (flames from comp.religion ignored) by promoting clean interfaces and
encapsulation. The key to a reliable system is that it be designed, and that
the design be well understood. At least part of the problem is
communication: we must ensure that the lower-level (in the design) designers
and implementors have a common understanding of (and access to) the
requirements and design. This almost requires a repository or electronic
design system, as I will allege that that which is on paper is lost.
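
To illustrate what I mean by a clean interface and encapsulation (a minimal
sketch; the module and all of its names are hypothetical), here is a
component whose internal representation is hidden, so that the teams using
it need only agree on, and review, the interface:

    #include <iostream>

    // Hypothetical module: the internal representation is hidden behind
    // a small interface, so clients only have to understand (and review)
    // these functions.
    class AltitudeSensor {
    public:
        AltitudeSensor() : lastReading(0.0), healthy(false) {}

        // Built-in test; clients must call this before read().
        bool selfTest() {
            healthy = true;           // stand-in for real hardware checks
            lastReading = 1200.0;     // stand-in for an initial sample
            return healthy;
        }

        // Copy the last valid reading (in metres) into 'metres'.
        // Returns false if the sensor has not passed self-test.
        bool read(double &metres) const {
            if (!healthy) return false;
            metres = lastReading;
            return true;
        }

    private:
        double lastReading;   // implementation details, free to change
        bool   healthy;       // without touching any client code
    };

    int main() {
        AltitudeSensor sensor;
        double m = 0.0;
        if (sensor.selfTest() && sensor.read(m))
            std::cout << "altitude: " << m << " m\n";
        else
            std::cout << "sensor unavailable\n";
        return 0;
    }

The private data can change freely without touching any client code, which
is exactly the kind of boundary an interface-consistency check can be run
against.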

Requiring extensive testing is perhaps inadvisable, as my reading of the
literature suggests that complete testing is an impossibility. If we have
designed the system correctly (we probably can't prove it), verify component
interfaces, build what is designed, and exhaustively test the components, we
have probably done the best we can. The key seems to be good technical
review of the requirements, designs, and tests. We must also remove
incentives to take short cuts in design, implementation, or verification.
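
As a toy sketch of what I mean by testing a component against its contract
(the function and its limits are invented for illustration): exercise the
stated behavior, including the boundary cases that a works-to-spec
demonstration would likely skip.

    #include <cassert>

    // Invented component under test: clamp a commanded value into a safe
    // range.  The names and limits are made up for illustration.
    double clampThrust(double commanded, double lo, double hi) {
        if (commanded < lo) return lo;
        if (commanded > hi) return hi;
        return commanded;
    }

    // Component test: exercise the stated contract, including the
    // boundary cases a works-to-spec demonstration would likely skip.
    int main() {
        assert(clampThrust(0.5,  0.0, 1.0) == 0.5);  // nominal
        assert(clampThrust(-2.0, 0.0, 1.0) == 0.0);  // below range
        assert(clampThrust(9.0,  0.0, 1.0) == 1.0);  // above range
        assert(clampThrust(0.0,  0.0, 1.0) == 0.0);  // lower boundary
        assert(clampThrust(1.0,  0.0, 1.0) == 1.0);  // upper boundary
        return 0;
    }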

As to ethics, even professionalism seems to be forbidden. Any real emphasis
on quality or good engineering is treated like other forms of
whistle-blowing. One must choose between ethics and employment. Per
Reaganite capitalism, morality has no place in business; only the bottom
line matters. All hail 'fluffing the wares' and Caveat Emptor.

Re design standards, I am unsure that one could define what a 'good design'
is. I suspect that we should instead stress a good design process and good
methods. This might imply that methodology should get out of the marketing
wars and back into the arena of intellectual discourse. It might also help
if vendors implemented methods (and methodologies) in tools, and if some
data were gathered as to which techniques and processes work on which kinds
of problems.