[comp.software-eng] Writing code/Basics of Program Design

daveb@geac.UUCP (David Collier-Brown) (07/12/88)

In article <580001@hpiacla.HP.COM> scottg@hpiacla.HP.COM (Scott Gulland) writes:
>Most of the projects I have worked on have had between five and forty
>engineers working on them and take one to two years to complete.  The
>first step in all of these has been to develop the functional requirements
>and produce a pseudo user manual.  This precisely defines the desired
>results of the product and helps to drive the latter test phases.
>At the completion of this step, the team begins to consider the design
>of the product.
[followed by a good discussion of the waterfall model]

  There are two other basic approaches to a largish project, one of
which is best described as "predictor-corrector", and another best
described as a "modelling" approach.

  I'll leave discussion of predictor-corrector for later (I love it
so much I'm tempted to bias my comments), and talk a little bit
about a modelling approach.

  On a large project I was involved in recently, the design phase
consisted of building a very complete model of the processing that
would be done, and then transforming that model into self-consistent
objects which would provide the behaviors that the model (and the
real-world system) required.
  For lack of another technique, we modelled the system as a whole in
terms of data flows, thus producing an input/output processing
(procedural) model of the system, which we then broke up by
locality-of-reference into a suite of readily identifiable
"objects". We then defined their behaviors to provide all the methods
that would be needed to provide
	1) all the required services to the user
	2) all the required facilities described by the model.

  I can recommend this method for projects of moderate size (20-odd
person-years), subject to the following caveats:

	1) The requirements are complete, and can be shown to be
	achievable by prototyping or some more formal technique.

	2) The system is not "complex".  By this I mean that it does
	one well-defined thing, not a cross-product of several
	things. It can be as large ("compound") as you like, so long
	as it has a natural structure to guide the model-builders
	and is not just a collection of random kludges.

	3) The model is verifiable and validatable.  Dataflow
	diagrams are an elderly idea, but still valuable and are
	well-supported by tools and manual techniques. We therefore
	could validate the model against both requirements and the
	actual experience of experts in the field, and verify that
	it was self-consistent.  

	4) The transformation into a design is well-understood by at
	least one person on the team.  We were fortunate in having a
	manager who had a good grasp of what an "object" was, and
	staff members who could visualize a (storage) object in
	terms which were easily definable in terms of a database's
	functions.
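
  For concreteness, here is a rough sketch in C of the shape one of
those "storage objects" might take (every name here is invented for
illustration; this is not code from the project): the state of one
dataflow node, bundled with the methods the model said it had to
provide, each implemented on top of a database layer.

    /* Hypothetical database layer underneath the object. */
    extern int db_write(void *db, const char *key, const char *rec);

    /* One "object" from the dataflow model: its state, plus
     * function pointers for the methods it must provide.
     */
    struct order_store {
        void *db;       /* handle onto the underlying database */
        int (*put)(struct order_store *, const char *, const char *);
        int (*get)(struct order_store *, const char *, char *, int);
    };

    /* A method: check the arguments, then delegate downward. */
    static int store_put(struct order_store *s,
                         const char *key, const char *rec)
    {
        if (key == NULL || rec == NULL)
            return -1;
        return db_write(s->db, key, rec);
    }

The point is only that every method traces straight back to a
behavior the model demanded; nothing else gets written.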

  Use of a model, even a dataflow model, directly as a structure for
a program carries with it some nasty risks, the most notable of
which is tunnel vision... Transforming a conceptual model into an
executable model (or design) gives one the chance to look at it from
a different direction, with different assumptions.

  --dave c-b

ps: there are several other "modelling" approaches, including
    "Michael Jackson" Design and some varieties of object-oriented
    programming.
-- 
 David Collier-Brown.  {mnetor yunexus utgpu}!geac!daveb
 Geac Computers Ltd.,  | "His Majesty made you a major 
 350 Steelcase Road,   |  because he believed you would 
 Markham, Ontario.     |  know when not to obey his orders"

daveb@geac.UUCP (David Collier-Brown) (07/14/88)

Last week I said...
|  There are two other basic approaches to a largish project, one of
| which is best described as "predictor-corrector", and another best
| described as a "modelling" approach.

  At the request of Stan Osborne @ pbdoc.pacbell, I'll rant
briefly about predictor-corrector... About which I'm biased.


    The term comes from Mathematics, describing algorithms which
guess at a starting point, move to that point and then decide which
way to move to get closer to the desired answer.
    In computer science, it really means that one is trying to
hit a moving target: one predicts what **should** be the answer, and
then finds out why not.
    In either case, one can discuss it in terms of:
        1) how fast it is converging on the target,
        2) how expensive it is to do another iteration,
        3) how much one can expect to gain by doing another
            iteration.

    This means that, once you have captured a customer, your
business managers can discuss how much you're going to make from her
in business terms, without delving deeply into the details of
programming.  After the first iteration, that is!
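
    For the mathematically inclined, a minimal sketch of the
numerical original (Newton's iteration standing in for the whole
family; an illustration of mine, not anything from a project):

        #include <stdio.h>
        #include <math.h>

        /* Guess, move, correct: Newton's iteration for sqrt(2).
         * The error per pass answers the same three questions
         * you ask of a release: how fast it converges, what a
         * pass costs, and whether another pass is worth doing.
         */
        int main(void)
        {
            double x = 1.0;                  /* initial guess */
            int i;

            for (i = 1; i <= 6; i++) {
                x = (x + 2.0 / x) / 2.0;     /* one correction */
                printf("pass %d: x = %.15f  error = %g\n",
                       i, x, fabs(x - sqrt(2.0)));
            }
            return 0;
        }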


    Specifically, one is doing a series of prototypes, each
getting closer and closer to what the customer wants.  Consistent
with hardware prototyping terminology, we'll call the prototypes the
"breadboard", "brassboard" and "printed circuit"...
    This is a **specific kind** of prototyping approach: not all
prototyping fits this model!



The Specification Phase:
    Write a breadboard prototype to test out the user interface
and determine the specifications.  Don't consider either efficiency
or elegance. This one you're unconditionally going to throw away, so
don't waste money!

    Make sure that it really does what the customer wants (well,
thinks she wants). Keep trying until you do. Don't stop early!
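
    By way of illustration (a toy, not from any real project), the
breadboard can be as crude as a loop that fakes the dialogue and
computes nothing at all:

        #include <stdio.h>
        #include <string.h>

        /* Breadboard: fake the user dialogue, compute nothing
         * real.  The point is to learn what the customer wants
         * to type and see; the whole thing gets thrown away.
         */
        int main(void)
        {
            char line[128];

            printf("order> ");
            while (fgets(line, sizeof line, stdin) != NULL) {
                if (strncmp(line, "add", 3) == 0)
                    printf("ok, order 42 added (faked)\n");
                else if (strncmp(line, "list", 4) == 0)
                    printf("42  widgets  12 dozen (faked)\n");
                else if (strncmp(line, "quit", 4) == 0)
                    break;
                else
                    printf("commands: add, list, quit\n");
                printf("order> ");
            }
            return 0;
        }

Crude, but it answers the only question that matters at this stage:
is this the conversation the customer wants to have?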


The Design Phase:
    Now go away into a back room and design a brassboard.
Find the architecture that best maps the task into an achievable
implementation. Use whatever mental tools you like, but plan for
evolving the program, especially storage structures (using a
database is a REAL good idea here), protocols and interfaces.
Try to break up the program into three layers: user interface,
application proper and core facilities.  Plan to write the interface
and core facilities as reusable/replaceable independent parts.
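
    A sketch of what that layering can look like at the source
level (all names invented): each layer sees only a small header
for the layer below it, so the interface and the core can be
pulled out and replaced independently.

        /* ui.h -- user interface: talks only to the application */
        void ui_run(void);

        /* app.h -- application proper: policy only; no I/O, no
         * knowledge of how things are stored */
        int app_add_order(const char *customer, const char *item,
                          int qty);
        int app_list_orders(void (*emit)(const char *line));

        /* core.h -- core facilities: storage and the like,
         * reusable across products */
        int core_store(const char *key, const char *record);
        int core_fetch(const char *key, char *record, int len);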

Implementation Phase 1: 
    Write the brassboard for alpha-testing, in parallel with the
customer's existing (manual) system.  Make the implementation simple:
The Unix kernel is full of **very simple** techniques, even though
the authors knew the harder, faster ones.  Follow their example!


Validation & Verification 1:
    There is no real "unit test"/"integration test" split: all in-house
tests are integration (verification) tests; all demos are validation
tests.  Spend time building scripts and test harnesses: you will
have to re-run most tests every time you fix something, just to be
sure you're not headed towards a maintenance nightmare.
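
    A minimal sketch of such a harness (names and the case layout
are invented): run every recorded case through the program and
compare against the blessed output, after every single fix.

        #include <stdio.h>
        #include <stdlib.h>

        /* Regression driver: for each named case, feed CASE.in
         * to the program under test and compare the result
         * against the blessed CASE.out.
         */
        int main(int argc, char **argv)
        {
            char cmd[512];
            int i, failed = 0;

            for (i = 1; i < argc; i++) {
                snprintf(cmd, sizeof cmd,
                    "./prog < %s.in > %s.got && cmp -s %s.got %s.out",
                    argv[i], argv[i], argv[i], argv[i]);
                if (system(cmd) != 0) {
                    printf("FAIL: %s\n", argv[i]);
                    failed++;
                }
            }
            printf("%d of %d cases failed\n", failed, argc - 1);
            return failed != 0;
        }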

    At the end of this, you should have a customer who says "It
works ok, but it's too big and too slow.  When do we get the next
release?"  If you get that question, you've hit the jackpot!


Implementation Phase 2:
    You're writing and releasing versions 1.2 through 1.999,
fixing specific performance problems and misfeatures.  You're also
seeing what's wrong with the design.  Don't try to redesign on the
fly; save that for version 2.0, but do record what you do and don't like.

Validation & Verification 2:  
    Test, and record bug ratios so as to make sure that you're
not going backwards.  Make sure that the number of bugs is
monotonically decreasing and is decreasing fast enough.
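
    The bookkeeping needn't be fancy.  A sketch (the counts are
invented), checking each release's open-bug count against the last
and flagging a rate that is too slow:

        #include <stdio.h>

        /* Open bugs per point release.  Each count should be
         * below the last, and falling fast enough -- here, by
         * at least 20% per release.
         */
        int main(void)
        {
            int bugs[] = { 120, 90, 75, 70, 68 };
            int n = sizeof bugs / sizeof bugs[0], i;

            for (i = 1; i < n; i++) {
                double ratio = (double)bugs[i] / bugs[i - 1];
                printf("release 1.%d: %3d open  ratio %.2f%s\n",
                       i + 1, bugs[i], ratio,
                       ratio >= 1.0 ? "  ** going backwards **"
                                    : ratio > 0.8 ? "  (too slow)"
                                                  : "");
            }
            return 0;
        }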

Implementation Phase 3:
    "Release 2.0".  Now you exit the prediction-correction cycle
and use a more formal method to review what you achieved, what you
want to change and add. And then do it.  But do consider any
new/experimental features as individual candidates for the
predictor-corrector approach.
    

Weaknesses of this Technique:
    1) It works best on hard, poorly understood or politically
       dangerous problems,
    2) It assumes that specifications are hard/impossible to
       write and validate,
    3) The customer must want a short time scale for first
       product, and be prepared to live with an "early release"
       to get something functioning,
    4) It typically produces code which is a wonderful blend of
       brilliance and sheer stupidity, so you can't just hand it
       over to an underpaid "maintenance" group after the first
       release, you have to keep the lead developers working at
       it, and
    5) Without active involvement of management and the customer,
       it can degenerate into mindless hacking...


Conclusion:
    It's like the girl with the curl: when she's good, she's
very very good, but when she's bad, she's horrid.

--dave (I love this technique, I use it **inappropriately** all the
        time...) c-b

-- 
 David Collier-Brown.  {mnetor yunexus utgpu}!geac!daveb
 Geac Computers Ltd.,  |  Computer science loses its
 350 Steelcase Road,   |  memory, if not its mind,
 Markham, Ontario.     |  every six months.