[net.cse] Working programs -- more warped ideas on CS education

rcj@burl.UUCP (Curtis Jackson) (03/22/86)

In article <502@ccivax.UUCP> rb@ccivax.UUCP (What's in a name ?) writes:
>This is a design problem.  The teacher should be able to write a good
>complete specification of the problem, or the students should be able
>to complete it.  The easiest specs are "here are inputs, this is what
>the outputs should be".

This brings up an interesting point -- has anyone ever had a course where
you were taught how to flesh out a poorly-written/incomplete spec?
By that I mean being taught how to analyze the spec and find places where
it is incomplete, what questions to ask to resolve its ambiguities, how to
look for boundary and exception conditions that are not properly spec'd, etc.
Seems to me that would be a damned valuable course.  Another one would
be "user interfacing".  Too many programmers in the real world are given
a problem where they have direct access to their end-users, yet never talk
to them.  When you are asked to design an assembly language for a new
piece of microcoded hardware and then write an assembler for it, it makes
no sense to go off in a corner and produce the language and assembler
without constant feedback from your end-users.  Why can so few
programmers figure this out?

>Agreed, the ability to enhance a system is important.  The solution
>is simple, assign the first spec, then assign the enhancements.
>That's how it's done in the real world :-).

I also agree with this wholeheartedly.  I would also like to see students
take more initiative in coming up with enhancements without them being
assigned.  One way to give them the practice I mentioned above and accomplish
this at the same time would be to divide the class into groups of 3-6
people, with one person being the programmer/analyst and the others being
users of the "product".  The programmer could then be asked to work with
his/her users and end up with a final description of what the product
would be able to do and a basic idea of how it was to be implemented.
Students could be asked to evaluate the worth of enhancements/features
suggested by the programmer, the worth of those suggested by the users, and
the ways in which the interaction between the two produced a better final
product.  Ask them to think about enhancements that they (as programmers)
thought were really nifty that the users said they would never use,
and the converse.  I could ramble on forever -- you get the idea.  It seems
that spec'ing, good up-front design, and programmer/user interaction
are not stressed at all in our CS programs today.

>>Thirdly, the test-it-till-it's-perfect school fails badly.  Almost
>>any real program is entirely too complex to test comprehensively.
>
>True, but most complex systems are built from "trivial units" or
>should be.  These are quite easy to test.

Yes, and one of the things that programmer-type CS'ers must learn is a
certain sense of discipline -- unit testing goes a long way.  One good
technique I have found, which could be pounded into the students, is:
whenever your program breaks, really analyze what caused it to break.
It is far too easy to say, "Oh, I really screwed up here" without noting
in the process that this is the 15th time in a row that the screw-up
involved a case where the subroutine/function worked just fine by
itself, but you screwed up in the use of a global variable.  'Hmmmmm,
maybe it is not just a matter of you having too much to drink the night
before; maybe global variables aren't such a hot idea in general.  Now
what was that the prof said about data localization?'  That sort of thing.
Also, teach them *how* to test.  Teach them to find boundary conditions,
exception conditions.  Teach them how to look at their code for simple
things.  If you have a subroutine/function that requires 4 arguments,
take the minuscule amount of time it takes to 'grep' (or the equivalent)
out every call of that function and make sure you do indeed pass it 4
arguments.

>The typical industrial approach is to test with "known good" data,
>then "known bad" data, then "pseudorandom" data.  Yes, there might
>still be a particular case where the algorithm won't work, but
>the chances of finding it in a standard functionality test are slim.
>Usually, these problems are things like "windows of error" where
>during a 5 us window something could theoretically go wrong (an
>interrupt corrupts the stack...).  Even PDP-11 C compilers have
>this problem.

In my dealings with the real world, I have found that a much more prevalent
form of error involves dealing with those nasty humans.  Programmers
who utter the phrase, "I never thought they'd do *that*!" too often should
be sent back for remedial reality training.  Users are strange beings whose
sole purpose in life is to break programs by doing utterly stupid and
incomprehensible things -- the sooner CS students learn this the better.
This one is easy to do in the classroom -- after they have completed
a programming assignment tell them to go out and run each other's programs
with the sole intent of breaking them.  Give extra credit for methods of
breaking that might actually occur in real use of the program -- the more
'normal' the method the better.  Then, on the next programming assignment,
have them write a user's manual for their product.  When you let them at
each other's throats this second time, caution that anything noted as
causing death and destruction in the user's manual is not a valid method
of breaking someone else's code -- with one exception.  It is all well and
good to put caveats and bug notices in user manuals, but if the user
accidentally triggers one of those documented bugs anyway and wipes out the
database s/he has been maintaining for the last six months, I wouldn't
exactly say that the program is working correctly.  No matter what the user
does, catastrophic damage should not result -- you cannot always adhere to
this, but it is *almost* always achievable.

>In one course, taught by the company training department, the
>teacher had us trade programs, THEN add an enhancement to the
>working code.  We were evaluated based on how long it took
>the other person to change the original.  It's a good assignment.

I have done this in a company course as well -- very good training!!

>>In particular, Michael Jackson's approach appeals to me.  He
>>says to build the program around a model of the situation in
>>which the program lives, rather than to perform the requested
>>function.  Then the changes in requested function can be 
>>accommodated easily, since they are mere window dressing on a
>>well constructed skeleton which remains clear and clean.

Bravo to Michael Jackson (any relation?  ;-) -- I heartily agree!

>>The first thought should be to make the program obvious to the
>>casual reader.  Bugs in such programs are not the major problem
>>that they so often are in more conventional (bad) programs.
>
>I would love to tell you about all the "Beautifully Documented" programs
>that required six weeks to write and twelve weeks for the enhancer
>to understand, and required "fixing" almost weekly.  This isn't 
>as bad as badly documented code, but almost.

Dick did not say "Beautifully Documented", he said "obvious to the casual
reader".  This is more often than not a question of coding style, variable
naming conventions, coding consistency, etc. rather than a question of
documentation or commenting.
If the code is well and consistently written with minimal informative
comments it will be easy to understand -- sloppy convoluted code can
have the holy hell commented out of it and still be incomprehensible.
-- 

The MAD Programmer -- 919-228-3313 (Cornet 291)
alias: Curtis Jackson	...![ ihnp4 ulysses cbosgd allegra ]!burl!rcj
			...![ ihnp4 cbosgd decvax watmath ]!clyde!rcj