[comp.software-eng] Soft-Eng Digest v5n19

soft-eng@MITRE.ARPA (Alok Nigam) (07/17/88)

Soft-Eng Digest             Sat, 16 Jul 88       Volume 5 : Issue 19

Today's Topics:
                  Basics of Program Design (8 msgs)
           Writing code/Basics of Program Design) (2 msgs)
    Writing code by hand (was: Basics of Program Design) (4 msgs)
----------------------------------------------------------------------

Date: 30 Jun 88 23:56:15 GMT
From: decvax!eagle_snax!gmcgary@bloom-beacon.mit.edu  ( Sun ECD Software)
Subject: Basics of Program Design

Pick up a copy of _Programmers_At_Work_(1st_Series)_ by Susan Lammers
published by Microsoft Press.  This is a collection of interviews with
famous and not-so-famous programmers such as Dan Bricklin, Gary
Kildall, Bill Gates, etc...

The person whose work habits I most identified with was Gary Kildall.
I don't know if he produces good software, since I've never seen his
work, I only know that his statements in the interview echo my own
feelings about the programming process.

In the early phases of a project, I tend to think about the problem
a lot at odd moments such as while commuting, or showering.  I also
haunt the research libraries of my local major university, starting
with the Computer and Control Abstracts and then tracking references
once I find some good initial papers.  If I get stuck on something, I
just let go of it and concentrate on something else.  I trust my
sub-conscious to figure things out.

I spent a number of my formative years reading program-design books,
particularly Glenford Myers's _Composite/Structured_Design_ which I
still re-read every couple of years.  Although this book presents
a design methodology using boxes and arrows and stuff like that, I
rarely use it except when I get stuck.  By now, I know these
design principles intimately, so I can tell as I am coding whether or
not I am creating modules that have the desirable design properties
(low coupling, high strength/cohesion).

I seldom bother with pseudo-code.  I find that legal C code with long,
descriptive identifier and function names, plus occasional comments,
expresses algorithms as readably and straightforwardly as pseudo-code.
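For what it's worth, here is a made-up illustration of the point; the
function and every name in it are invented, not from any real program:

```c
#include <ctype.h>

/* Legal C that reads much like pseudo-code: long descriptive
 * names carry most of the meaning, with a comment only where
 * the names alone don't tell the story. */
static int count_words_in_line(const char *line)
{
    int word_count = 0;
    int currently_inside_word = 0;

    for (; *line != '\0'; line++) {
        if (isspace((unsigned char)*line)) {
            currently_inside_word = 0;
        } else if (!currently_inside_word) {
            currently_inside_word = 1;
            word_count++;           /* first character of a new word */
        }
    }
    return word_count;
}
```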

I could go on and on, but this should give you a feeling of what works
for me...

------------------------------

Date: 1 Jul 88 03:08:40 GMT
From: att!lzaz!lznv!psc@bloom-beacon.mit.edu  (Paul S. R. Chisholm)
Subject: Basics of Program Design

< "Would you buy a used operating system from these guys?" >

In article <1559@microsoft.UUCP>, alonzo@microsoft.UUCP (Alonzo Gariepy) writes:
>In article <900@td2cad.intel.com> brister@td3cad.UUCP (James Brister) writes:
> What I mean is: do experienced programmers usually write flow charts,
> or pseudo code.

An anecdote from CS 701 (any other UW-Madcity alumni remember the
graduate compiler course?)  In the last week or two, the professor
(Marvin Solomon, great teacher) said we had to turn in some external
documentation.  (Right before the project was due; hey, I guess he was
teaching us about the real world.-)  Someone asked, "Should we go back
and write flowcharts?"

The entire class broke out into hysterical laughter.

No, I don't think anyone writes flow charts any more.

>                 Do they write the whole program out by hand first or
> build it up in their favorite editor?

I envy the heck out of people who can design in their head, keeping
just the right amount of detail, then type the program in top-down.

> I do.  Write code by hand sometimes, that is.

Me, too.  I can't get a design right the first time.  It takes me two
or three attempts.  I need those scribbled-on pieces of paper, with
goofs lightly crossed out, as places to make mistakes that I can throw
out!  Writing with an editor just isn't the same.  Material that I
wrote and changed is gone forever.  Worse, it's harder to throw out a
compilable bad effort.  And the temptation to sweat details too early
is enormous when the compiler is right *there*. . . .

Besides, it gets cold out by the PC.  If I bring my notebook to bed and
scribble there, I stay comfy and warm. (And so does my wife.)

------------------------------

Date: 4 Jul 88 05:46:58 GMT
From: ucsdhub!hp-sdd!nick@ucsd.edu  (Nick Flor)
Subject: Basics of Program Design

In article <900@td2cad.intel.com> brister@td3cad.UUCP (James Brister) writes:
>For the past few years my professors have rambled on about data
>structures, compiler design and more, ad nauseam. But none of them
>have ever mentioned the best way, (or any way really) to go about
>sitting down and actually writing a program.
>
>What I mean is: do experienced programmers usually write flow charts,
>or pseudo code. Do they write the whole program out by hand first or
>build it up in their favorite editor?

Yikes.  It is kind of scary reading some of the responses to your posting, James.
Seems like there are a lot of programmers employing ad hoc methods in the
development of their software.  Reminds me of a story Professor W. Howden
told us about a conference he attended where a speaker said something like
"Newton said 'if I have seen farther, it is only because I've stood on
the shoulders of giants'.  The problem with programming is that everyone's
stepping on each others toes'".

But I digress.  Anyways, let me see if I can shed some light on the topic:

To understand what tools are needed to develop a program, one first needs
to understand the techniques humans use to solve complex problems.
For things that are too complex, humans use two main techniques:

1) Abstraction
2) Decomposition

When developing a large program whose entirety you cannot fathom off the top
of your head, the best thing to do is to decompose it into a bunch of
more manageable abstract entities.  After more thinking, you can then
decompose these abstract items even further, until you have a concrete
realization of the solution, i.e., code.

Programs consist of a variety of functions working on a variety of data.
Most languages support functional abstraction, but not data abstraction.
This is why you cannot just sit down and start programming in languages
like 'C' and Pascal.
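To be fair, you can fake data abstraction by hand in 'C' by hiding the
representation behind a small set of functions; the language just won't
enforce it for you.  A made-up sketch (all names invented):

```c
/* A tiny stack "module".  Callers are expected to touch the
 * struct only through the functions below; C won't stop them,
 * which is exactly the weakness being discussed. */
typedef struct {
    int items[16];
    int top;                /* index of the next free slot */
} stack;

static void stack_init(stack *s)            { s->top = 0; }
static void stack_push(stack *s, int value) { s->items[s->top++] = value; }
static int  stack_pop(stack *s)             { return s->items[--s->top]; }
static int  stack_is_empty(const stack *s)  { return s->top == 0; }
```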

Good tools for software development are those that support both functional
and data abstraction in addition to provisions for decomposition of
both functions and data.

The most common tools used today for software engineering are
dataflow diagrams & structure charts.  But by no means are they
the best.

Anyways, I could go on and on and on.  Your best bet is to read a
book on software engineering.

Hope this helps.. and let's have more discussion on this subject.

------------------------------

Date: 5 Jul 88 19:54:32 GMT
From: uwslh!lishka@speedy.wisc.edu  (Fish-Guts)
Subject: Basics of Program Design

In article <1392@lznv.ATT.COM> psc@lznv.ATT.COM (Paul S. R. Chisholm) writes:
>< "Would you buy a used operating system from these guys?" >
>
>In article <1559@microsoft.UUCP>, alonzo@microsoft.UUCP (Alonzo Gariepy) writes:
>>In article <900@td2cad.intel.com> brister@td3cad.UUCP (James Brister) writes:
>> What I mean is: do experienced programmers usually write flow charts,
>> or pseudo code.
>
>An anecdote from CS 701 (any other UW-Madcity alumni remember the
>graduate compiler course?)
                                    ^^^^^^^^^^
     Hey, nice to see a familiar school mentioned!  I hope to be
taking the course in the fall semester.

>No, I don't think anyone writes flow charts any more.

     It depends what you mean by flow-charts.  A point that I haven't
seen mentioned yet is that different programmers see programs in
different ways.  A partner and I were working on a natural-language
parser/generator, and I found that one of the biggest differences
between us was: she thought of code in terms of written text (i.e.
pseudo-code, real-code, whatever), whereas I can only think of
programs by "visualizing" them (by writing out flowcharts, pictures of
data-structures, whatever).  I found that she couldn't think in
pictures, and her textual forms of representation were just so many
words to me.  After we figured that out, it was a lot easier working
with one another, and we created a good set of programs together.

     At my *real* job I flow chart every now and then, but only for
complex parts of large programs.  I would hate to flow chart a 10,000
line C program in full!  (It isn't worth the effort to me to go that
far).  However, I draw a hell of a lot of diagrams of just about
anything that I need to clarify...it helps me, and it helps the people
I am writing these programs for, because I have found that they rarely
think in terms of "real" or "pseudo" code (most of them are not
programmers).  Flow charting has its place; same as pseudo-coding and
whatever else.  Some people still use it.

>Me, too.  I can't get a design right the first time.  It takes me two
>or three attempts.  I need those scribbled-on pieces of paper, with
>goofs lightly crossed out, as places to make mistakes that I can throw
>out!  Writing with an editor just isn't the same.  Material that I
>wrote and changed is gone forever.  Worse, it's harder to throw out a
>compilable bad effort.  And the temptation to sweat details too early
>is enormous when the compiler is right *there*. . . .

     Odd...I never write down code by hand.  I always edit a printout
if I need to see more than a screenful at a time.  Different strokes....

     Although I can't think of any foolproof design method, I have a
suggestion (for the original poster): use good tools.  I use programs
like RCS and Gnu-Emacs, so that I can change code and usually have a
method of undo'ing my additions.  It is not fun to learn this the hard
way!  Also, tools like profilers and symbolic-debuggers really help
when you are stuck or are fine tuning...no matter what strategy you use.

>Besides, it gets cold out by the PC.  If I bring my notebook to bed and
>scribble there, I stay comfy and warm. (And so does my wife.)

     Designing programs in bed can be fun, but my girlfriend really
hates it when I have finally settled down to sleep, only to exclaim
"hey, wait a minute!" and go back to more designing.  Insights come at
the worst moments sometimes!

>-Paul S. R. Chisholm, {ihnp4,cbosgd,allegra,rutgers}!mtune!lznv!psc

     The above are what I have learned in school and by making a lot
of mistakes at my work.  I would be glad to continue this topic with
anyone (esp. the original poster) over email.

------------------------------

Date: 6 Jul 88 05:20:35 GMT
From: ucsdhub!hp-sdd!nick@sdcsvax.ucsd.edu  (Nick Flor)
Subject: Basics of Program Design

In article <349@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>     At my *real* job I flow chart every now and then, but only for
>complex parts of large programs.  I would hate to flow chart a 10,000
>line C program in full!  (It isn't worth the effort to me to go that
>far).  However, I draw a hell of a lot of diagrams of just about
>anything that I need to clarify...it helps me, and it helps the people
>I am writing these programs for, because I have found that they rarely
>think in terms of "real" or "pseudo" code (most of them are not
>programmers).  Flow charting has its place; same as pseudo-coding and
>whatever else.  Some people still use it.

Glad to see more discussion on design.  Let me put in my 2 cents
worth about flow charts:

Flow charts show flow of control.  As such, they are more useful for
documenting detailed parts of a program, e.g. a complex algorithm.
They are generally not useful in high level design, where you are trying
to figure out what functions and data must be created.  Again, dataflow
diagrams would be more useful during initial design.

Flow charts should primarily be used in the detailed design phase.

Speaking of which, for those of you who have not taken a software
engineering course, one of the most common software life cycle models
is the phased life cycle model.

Briefly, there are 5 phases

1) Requirements -- in this phase you specify what you want the software
   to do without regard to details about how it should be done.
2) Design -- general and detailed design of the software.
3) Implementation -- coding the design up.
4) Validation -- making sure the implementation conforms to the specifications
5) Maintenance -- bug fixes, enhancements.

Supposedly you can go from one phase to the next, much like a production line,
without going back, but in practice this isn't the case.

I guess dataflow diagrams can be used in phases 1 and 2.  Structure charts,
2D diagrams, etc. in phase 2.  Flow charts/pseudocode in the
detailed design phase (phase 2.5).  Good editors in phase 3.
Static and dynamic analyzers, i.e. debuggers etc., in phase 4.

>     Odd...I never write down code by hand.  I always edit a printout
>if I need to see more than a screenful at a time.  Different strokes....

Yes.  Why write it twice if you have a good editor?  It's infinitely easier
to type 'dd' to erase a line in VI, than to use a pencil eraser.

Arguments welcome,

------------------------------

Date: 6 Jul 88 12:17:18 GMT
From: ut-emx!chpf127@sally.utexas.edu  (J. Eaton)
Subject: Basics of Program Design

In article <1335@hp-sdd.HP.COM>, nick@hp-sdd.HP.COM (Nick Flor) writes:
> Yes.  Why write it twice if you have a good editor?  It's infinitely easier
> to type 'dd' to erase a line in VI, than to use a pencil eraser.
> Arguments welcome,

    Almost true (infinitely?), but is VI really a good editor?
    I think it's easier to use Emacs for that (and many other) tricks :-)

------------------------------

Date: 7 Jul 88 22:11:45 GMT
From: tektronix!teklds!amadeus!karenc@bloom-beacon.mit.edu  (Karen Cate;6291502;92-734;LP=A;60.D)
Subject: Basics of Program Design

In article <1335@hp-sdd.HP.COM> nick@hp-sdd.UUCP (Nick Flor) writes:
>In article <349@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>>
>>     At my *real* job I flow chart every now and then, but only for
>>complex parts of large programs.  I would hate to flow chart a 10,000
> ...
>
>Flow charts should primarily be used in the detailed design phase.

     This is fun.  Here's my $.02:

     I have used "hacked-up" versions of both data flow diagrams and
     flow charts.  Looking back, I used whatever made the most sense
     out of my ideas (i.e. whatever was easiest at the time).  For
     example, I took a Software Engineering class my senior year.  The
     major project was to design some software system.  We didn't have
     to write the code, just design the spec's using a system the
     instructor liked.  That included data flow diagrams.

     A couple of people in my group were interested in the stock market,
     so we decided to write what we called a stock portfolio manager.
     When it came time to generate data flow diagrams, we had a lot of
     trouble.  Essentially we had a database of stock information that
     was either entered by hand, or automatically downloaded from another
     system (part of Compuserve, or something).  The rest of the "package"
     were just "filters" that plotted or graphed the data from a menu-
     driven user interface.

     We did our best guess, which looked something like a broom:  one
     input, a bubble, and a bunch of "parallel" outputs (which roughly
     corresponded to each menu option).  We took that to the instructor
     with the comment that we felt our system would be more suited to
     control flow diagrams (essentially a flow chart).  The instructor
     agreed, so we based the rest of our design work on those.

     Another group tried to automate a restaurant with a system similar
     to what many grocery stores have (with a few basic modifications).
     This application is particularly adapted to data-flow.  Its purpose
     is to gather and disseminate data to and from several places...
     I guess our application moved the user around to look at the data
     from various angles...

>>     Odd...I never write down code by hand.  I always edit a printout
>>if I need to see more than a screenful at a time.  Different strokes....
>
>Yes.  Why write it twice if you have a good editor?  It's infinitely easier
>to type 'dd' to erase a line in VI, than to use a pencil eraser.
>
     At least I'm not alone out here with my bad memory.  I also tend
     to do a lot of design on paper, before I start typing.  For recent
     projects I've been using one pile of paper for code segments (I
     have been solving code intricacies rather than conceptual ones),
     another pile for data structures/variable names, and I use my
     terminal to look up documentation (socket man pages, etc).  If
     there's not enough information there, there's always the telephone...

     The other advantage to this system is that at least two thirds of
     it is portable.  I can sit at home with dogs and husband so that I
     can see them once in a while...  But that's another topic.
>
>Arguments welcome,

     We don't argue, we just forcibly state our opinions!

------------------------------

Date: 11 Jul 88 06:24:15 GMT
From: ucsdhub!hp-sdd!nick@ucsd.edu  (Nick Flor)
Subject: Basics of Program Design

In article <3675@teklds.TEK.COM> karenc@amadeus.UUCP (Karen Cate) writes:
>     We did our best guess, which looked something like a broom:  one
>     input, a bubble, and a bunch of "parallel" outputs (which roughly
>     corresponded to each menu option).  We took that to the instructor
>     with the comment that we felt our system would be more suited to
>     control flow diagrams (essentially a flow chart).  The instructor
>     agreed, so we based the rest of our design work on those.

Although it was easier for you to model your system in terms of control
flow, let me ask how you made the transition from control flow diagrams
to functions and data.  That is, from the control flow diagrams, how did
you determine what functions and data types you needed in your program?

Computer languages allow you to write functions that operate on data.
Although control flow diagrams show you the sequence of events that occur,
and can give you a good intellectual feel for how the program works,
they don't lend themselves well to decomposition into abstract functions
and data types that you can write a program for; especially if you are still
in the design phase.  Again, I don't find anything wrong with control flow
diagrams in the detailed design phase, but in the high level, i.e. general,
design phase I'm not convinced of their usefulness.

Then again, I could be wrong...

------------------------------

Date: 12 Jul 88 13:33:15 GMT
From: mnetor!spectrix!yunexus!geac!daveb@uunet.uu.net  (David Collier-Brown)
Subject: Writing code/Basics of Program Design)

In article <580001@hpiacla.HP.COM> scottg@hpiacla.HP.COM (Scott Gulland) writes:
>Most of the projects I have worked on have had between five and forty
>engineers working on them and take one to two years to complete.  The
>first step in all of these has been to develop the functional requirements
>and produce a pseudo user manual.  This precisely defines the desired
>results of the product and helps to drive the later test phases.
>At the completion of this step, the team begins to consider the design
>of the product.
[followed by a good discussion of the waterfall model]

  There are two other basic approaches to a largish project, one of
which is best described as "predictor-corrector", and another best
described as a "modelling" approach.

  I'll leave discussion of predictor-corrector for later (I love it
so much I'm tempted to bias my comments), and talk a little bit
about a modelling approach.

  On a large project I was involved in recently, the design phase
consisted of building a very complete model of the processing that
would be done, and then transforming that model into self-consistent
objects which would provide the behaviors that the model (and the
real-world system) required.
  For lack of another technique, we modelled the system as a whole in
terms of data flows, thus producing an input/output processing
(procedural) model of the system, which we then broke up by
locality-of-reference into a suite of readily identifiable
"objects". We then defined their behaviors to provide all the methods
that would be needed to provide
        1) all the required services to the user
        2) all the required facilities described by the model.
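In a language without built-in objects, one of those behavior-providing
"objects" might come out as a structure carrying both its state and its
methods.  The sketch below is invented for illustration (the account
example has nothing to do with the actual project):

```c
/* An "object" as a struct: private-by-convention state plus
 * function pointers providing the required behaviors. */
typedef struct account account;
struct account {
    int balance;                                  /* the state   */
    void (*deposit)(account *self, int amount);   /* the methods */
    int  (*query)(const account *self);
};

static void account_deposit(account *self, int amount) { self->balance += amount; }
static int  account_query(const account *self)         { return self->balance; }

static void account_init(account *a)
{
    a->balance = 0;
    a->deposit = account_deposit;
    a->query   = account_query;
}
```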

  I can recommend this method for projects of moderate size (20-odd
person-years), subject to the following caveats:

        1) The requirements are complete, and can be shown to be
        achievable by prototyping or some more formal technique.

        2) The system is not "complex".  By this I mean that it does
        one well-defined thing, not a cross-product of several
        things. It can be as large ("compound") as you like, so long
        as it has a natural structure to guide the model-builders
        and is not just a collection of random kludges.

        3) The model is verifiable and validatable.  Dataflow
        diagrams are an elderly idea, but still valuable and are
        well-supported by tools and manual techniques. We therefore
        could validate the model against both requirements and the
        actual experience of experts in the field, and verify that
        it was self-consistent.

        4) The transformation into a design is well-understood by at
        least one person on the team.  We were fortunate in having a
        manager who had a good grasp of what an "object" was, and
        staff members who could visualize a (storage) object in
        terms which were easily definable in terms of a database's
        functions.

  Use of a model, even a dataflow model, directly as a structure for
a program carries with it some nasty risks, the most notable of
which is tunnel vision... Transforming a conceptual model into an
executable model (or design) gives one the chance to look at it from
a different direction, with different assumptions.

  --dave c-b

ps: there are several other "modelling" approaches, including
    "Michael Jackson" Design and some varieties of object-oriented
    programming

------------------------------

Date: 14 Jul 88 15:48:25 GMT
From: mnetor!spectrix!yunexus!geac!daveb@uunet.uu.net  (David Collier-Brown)
Subject: Writing code/Basics of Program Design)

Last week I said...
>  There are two other basic approaches to a largish project, one of
> which is best described as "predictor-corrector", and another best
> described as a "modelling" approach.

  At the request of Stan Osborne @ pbdoc.pacbell, I'll rant
briefly about predictor-corrector... About which I'm biased.

    The term comes from mathematics, describing algorithms which
guess at a starting point, move to that point and then decide which
way to move to get closer to the desired answer.
    In computer science, it really means that one is trying to
hit a moving target: one predicts what **should** be the answer, and
then finds out why not.
    In either case, one can discuss it in terms of:
        1) how fast it is converging on the target,
        2) how expensive it is to do another iteration,
        3) how much one can expect to gain by doing another
            iteration.
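The mathematical original fits in a few lines.  Newton's method for
square roots (a stock textbook example, nothing to do with any
particular project) predicts a starting point, measures the error, and
corrects until another iteration isn't worth the cost:

```c
#include <math.h>

/* Predictor-corrector in miniature: guess, check how far off
 * the guess is, correct, and stop once further iterations
 * buy less than the tolerance. */
static double newton_sqrt(double a)
{
    double guess = a;                        /* predict a starting point */
    while (fabs(guess * guess - a) > 1e-9)   /* converged close enough?  */
        guess = (guess + a / guess) / 2.0;   /* correct toward the root  */
    return guess;
}
```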

    This means that, once you have captured a customer, your
business managers can discuss how much you're going to make from her
in business terms, without delving deeply into the details of
programming.  After the first iteration, that is!

    Specifically, one is doing a series of prototypes, each
getting closer and closer to what the customer wants.  Consistent
with hardware prototyping terminology, we'll call the prototypes the
"breadboard", "brassboard" and "printed circuit"...
    This is a **specific kind** of prototyping approach: not all
prototyping fits this model!


The Specification Phase:
    Write a breadboard prototype to test out the user interface
and determine the specifications.  Don't consider either efficiency
or elegance. This one you're unconditionally going to throw away, so
don't waste money!

    Make sure that it really does what the customer wants (well,
thinks she wants). Keep trying until you do. Don't stop early!


The Design Phase:
    Now go away into a back room and design a brassboard.
Find the architecture that best maps the task into an achievable
implementation. Use whatever mental tools you like, but plan for
evolving the program, especially storage structures (using a
database is a REAL good idea here), protocols and interfaces.
Try to break up the program into three layers: user interface,
application proper and core facilities.  Plan to write the interface
and core facilities as reusable/replaceable independent parts.

Implementation Phase 1:
    Write the brassboard for alpha-testing, in parallel to the
customer's existing (manual) system.  Make the implementation simple:
The Unix kernel is full of **very simple** techniques, even though
the authors knew the harder, faster ones.  Follow their example!


Validation & Verification 1:
    There is no real "unit test/integration test": all in-house
tests are integration (verification) tests, all demos are validation
tests.  Spend time building scripts and test harnesses: you will
have to re-run most tests every time you fix something, just to be
sure you're not headed towards a maintenance nightmare.

    At the end of this, you should have a customer who says "It
works ok, but it's too big and too slow. When do we get the next
release?"  If you get that question, you've hit the jackpot!


Implementation Phase 2:
    You're writing and releasing versions 1.2 through 1.999,
fixing specific performance problems and misfeatures.  You're also
seeing what's wrong with the design.  Don't try to redesign on the
fly; save that for version 2.0, but do record what you do/don't like.

Validation & Verification 2:
    Test, and record bug ratios so as to make sure that you're
not going backwards.  Make sure that the number of bugs is
monotonically decreasing and is decreasing fast enough.
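That check is cheap to mechanize.  A minimal sketch, with made-up bug
counts:

```c
/* Return 1 if each release's bug count is strictly lower than
 * the last, i.e. the project is not going backwards. */
static int bugs_monotonically_decreasing(const int *counts, int n)
{
    int i;
    for (i = 1; i < n; i++)
        if (counts[i] >= counts[i - 1])
            return 0;
    return 1;
}
```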

Implementation Phase 3:
    "Release 2.0".  Now you exit the prediction-correction cycle
and use a more formal method to review what you achieved, what you
want to change and add. And then do it.  But do consider any
new/experimental features as individual candidates for the
predictor-corrector approach.


Weaknesses of this Technique:
    1) It works best on hard, poorly understood or politically
       dangerous problems,
    2) It assumes that specifications are hard/impossible to
       write and validate,
    3) The customer must want a short time scale for first
       product, and be prepared to live with an "early release"
       to get something functioning,
    4) It typically produces code which is a wonderful blend of
       brilliance and sheer stupidity, so you can't just hand it
       over to an underpaid "maintenance" group after the first
       release, you have to keep the lead developers working at
       it, and
    5) Without active involvement of management and the customer,
       it can degenerate into mindless hacking...


Conclusion:
    It's like the girl with the curl: when she's good, she's
very very good, but when she's bad, she's horrid.

--dave (I love this technique, I use it **inappropriately** all the
        time...) c-b

------------------------------

Date: 6 Jul 88 13:41:51 GMT
From: att!lzaz!lznv!psc@bloom-beacon.mit.edu  (Paul S. R. Chisholm)
Subject: Writing code by hand (was: Basics of Program Design)

In article <1335@hp-sdd.HP.COM>, nick@hp-sdd.HP.COM (Nick Flor) writes:
> It's infinitely easier to type 'dd' to erase a line in VI, than to
> use a pencil eraser.

Exactly why I *don't* use vi in the design stage!  I've got a terrible
memory.  Managing complexity is something I've got to be very
conscientious about.  My first hack at a top-down design is usually
nowhere near right.  With an editor, I'd just change the lines.  On
paper, I either cross out a couple of lines, or just start on a fresh
page.  (Bound notebooks are useful, because I don't misplace the old
design.)  This way, when I'm a third of the way through, I can very
easily pick up the good parts of the previous design attempt.

SCCS and such don't help in this phase.  I hate logging a mistake in
the source database.  An operating system that stores the last n
versions of a file can help, sort of.  You often find that the old
stuff you wanted was n+1 versions ago.  This fear discourages you
from saving your work often, which can have other bad effects.

And designing on paper has one great advantage:  I can write about
other stuff in the margins.

------------------------------

Date: 5 Jul 88 18:21:03 GMT
From: hpda!hpdslab!hpiacla!scottg@bloom-beacon.mit.edu  (Scott Gulland)
Subject: Writing code by hand (was: Basics of Program Design)

> For the past few years my professors have rambled on about data
> structures, compiler design and more, ad nauseam. But none of them
> have ever mentioned the best way, (or any way really) to go about
> sitting down and actually writing a program.  ... etc.

This is one area in which our educational institutions are very weak.
Most of the projects I have worked on have had between five and forty
engineers working on them and take one to two years to complete.  The
first step in all of these has been to develop the functional requirements
and produce a pseudo user manual.  This precisely defines the desired
results of the product and helps to drive the later test phases.
At the completion of this step, the team begins to consider the design
of the product.

The types of projects I work on involve real time applications for the
factory floor environment.  These applications always contain multiple
programs (processes) which need to operate in a semi-synchronous fashion
(some processes are synchronized, some aren't), but are typically driven
by asynchronous events.  Note that other kinds of applications may require
entirely different approaches to design.  Now that you have a good idea of
the types of projects I work on, here is how I approach design.

As the design of the functional requirements comes to an end, I begin to
work out a design for the problem in my head.  I examine various approaches
to the problem, spending a week or two just thinking about how each might
be implemented.  This is essentially a brain-storming exercise.  By the
time the functional requirements are complete, I usually have isolated in
my mind the best and most promising design approaches to the problem at
hand.

This technique involves considering any idea, no matter how bizarre, without
prejudging its suitability as a solution.  Each idea or approach is mentally
evaluated with respect to design tradeoffs (e.g. benefits vs. detriments).
Over time and much thought, some of these ideas begin to float to the
surface as being most appropriate for the problem being worked on.

I would like to make a couple of comments about this technique for the
benefit of new engineers.  Effective use of the technique requires
extensive knowledge of the design tradeoffs for an operating system/
architecture as they apply to a specific class of applications.  This
in turn requires considerable development experience before you will
know how to make these tradeoffs.  Therefore, I would not recommend that
new engineers attempt to use this technique on complex projects.

We then begin the formal part of design.  On a small project the entire
team participates while on a large project the top four or five technical
engineers are chosen to formulate a high-level architecture.  The design
of the architecture involves breaking the problem into major modules
and identifying the data flows between the modules.  Note that the data
flows usually do not contain a great amount of detail but are defined
at the conceptual level.

Following the initial draft of the high-level design, we begin to define
in detail actual critical data items flowing between modules.  We also
break each module into its component subfunctions and some of the major
data elements used within the module.  Note that throughout this whole
process we evaluate design tradeoffs and brainstorm different approaches.
After several iterations of the above we arrive at a fairly solid
high-level architecture for the problem.

Following the high-level design, we perform schedule estimates for each
module and submodule.  Based on these estimates we split up
and distribute the components of the architecture to the entire team
for detailed design work.  Each member of the team is responsible for
coming up with a detailed design for their assigned areas.  Note that
in large projects this may involve further sub-teams.  In any event,
the team must maintain excellent communication amongst themselves
to ensure that all modules integrate cleanly together.

As I begin detailed design of my individual modules, I go through
the same mental brainstorming technique outlined above, only in much finer
detail.  After choosing the basic approach to apply to each module, I
define the data structures being passed into and out of the module.
I also attempt to define as completely as possible the major global
data structures used within the module.

At this point, if the module is fairly complex, I will use a variety of
techniques to simplify it.  Many times this will be pseudo-code.  The
team then holds a series of design reviews for each of the individual
modules.  The design reviews usually catch most of the design defects
and greatly aid the integration of modules once they are coded.
We are also currently looking at some of the new structured design tools
coming out, such as IDE and Teamwork.  However, it is still too early
to tell how effective these are in aiding the design process.

As we finish coding each module, we attempt to perform unit testing
to whatever extent is possible.  In large modules we will frequently
unit test each function within the module as we complete it.  Note
that unit testing attempts to find defects in the functionality of
the module as defined during design.

------------------------------

Date: 7 Jul 88 13:53:16 GMT
From: uwslh!lishka@speedy.wisc.edu  (Fish-Guts)
Subject: Writing code by hand (was: Basics of Program Design)

In article <1398@lznv.ATT.COM> psc@lznv.ATT.COM (Paul S. R. Chisholm) writes:
>< "Would you buy a used operating system from these guys?" >
>Exactly why I *don't* use vi in the design stage!  I've got a terrible
>memory.  Managing complexity is something I've got to be very
>conscientious about.  My first hack at a top-down design is usually
>nowhere near right.  With an editor, I'd just change the lines.  On
>paper, I either cross out a couple of lines, or just start on a fresh
>page.  (Bound notebooks are useful, because I don't misplace the old
>design.)  This way, when I'm a third of the way through, I can very
>easily pick up the good parts of the previous design attempt.

     Not to pick on vi, but this is why I use gnu-emacs.  It has
"infinite" undo.  If you make a *big* mistake you can undo to your
heart's content.  I have done this quite a few times.  Also, gnu-emacs has
different buffers, so if I need to save a piece of the file I am
editing, I just yank out the code to be changed and save it in a spare
buffer for the time being.  The same goes for backup files: keep a backup
of previous code that you might need to revert to.

>SCCS and such don't help in this phase.  I hate logging a mistake in
>the source database.  An operating system that stores the last n
>versions of a file can help, sort of.  You often find that the old
>stuff you wanted was in n+1 versions ago.  This fear discourages you
>from saving your work often, which can have other bad effects.

     I am not sure about SCCS (I have never used it), but RCS has a
tree-like structure for revisions.  Mr. Tichy wrote a good article on
it, which describes the system better than the documentation
available; I can get the reference if you are interested.

>And designing on paper has one great advantage:  I can write about
>other stuff in the margins.

     So true.  Now that would be a *really* good innovation for
computers...margins to write things in (in editors).  Maybe I'll hack
up gnu-emacs to do this ;-)

------------------------------

Date: 8 Jul 88 04:24:16 GMT
From: sun.soe.clarkson.edu!mrd@tcgould.tn.cornell.edu  (Mike DeCorte)
Subject: Writing code by hand (was: Basics of Program Design)

   You often find that the old
   stuff you wanted was in n+1 versions ago.  This fear discourages you
   from saving your work often, which can have other bad effects.

In gnu emacs, there are the following variables that *you* can
set.

make-backup-files (nil or non-nil)
kept-old-versions (a number, default 2)
kept-new-versions (a number, default 2)

Using these you can choose n.  If you like, you could make n the number of
files you can put on your disk.  I wouldn't advise it, but you could. :-)
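A sketch of those settings as they might appear in a .emacs file (the
values are illustrative; `version-control' is the related variable that
turns on numbered backups in the first place):

```elisp
;; Keep several numbered backup versions of each file (illustrative values).
(setq make-backup-files t      ; nil disables backups entirely
      version-control t        ; make numbered backups, not just `file~'
      kept-old-versions 2      ; oldest versions to keep (default 2)
      kept-new-versions 2)     ; newest versions to keep (default 2)
```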

------------------------------

End of Soft-Eng Digest
******************************