[comp.software-eng] Specification Tools and Code Testing

doug@nixtdc.uucp (Doug Moen) (08/13/90)

cox@stpstn.UUCP (Brad Cox) writes:
)My IEEE Software paper (Nov 1990) goes into the distinction between 
)specification and implementation in some detail...
)
)Where inheritance really belongs, and where it offers its greatest
)potential, is in the (as-yet-nonexistent) specification tools; i.e.
)specification language "compilers" that compile an input notation, the
)specification, into *tests*: executable code, or "gauges", that determine
)(by testing) whether a given implementation is within tolerance of its
)specification.
)
)In case this isn't obvious, the meaning I'm using for "specification"
)includes not only *static* specifications (i.e. what method names/types
)are listed in some interface file; i.e. class Stack has methods push and pop),
)but *dynamic* specifications (i.e.  if I push 1,2,3 on an instance of class
)Stack, pop should return 3,2,1).

gregk@cbnewsm.att.com (gregory.p.kochanski) writes:
>Won't the process of writing the specifications be as complex as writing
>the program, if you want to specify things in such detail that all
>operations of a class can be tested?

My group is currently in the test phase of developing a large document
image processing system.  This is a commercial project, and we are spending
a great deal of effort in testing to avoid releasing a buggy product.
What we are discovering is that testing a big system is *hard*, and
that having a sound testing methodology is crucial.

A specification language, together with a compiler that automatically
generates test code, as Brad describes, would be tremendously useful
for us.  I don't think it matters that writing the specifications is
as labour intensive as writing the original code; we already spend
a lot more time on testing and documentation than on coding.
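
To make the idea concrete, here is a hand-written sketch (in C) of the
kind of "gauge" such a compiler might emit for the Stack example above;
the stack_push/stack_pop interface is invented purely for illustration,
since no such generator exists yet:

    /* Gauge for the dynamic Stack specification: push 1,2,3 and
     * pop must yield 3,2,1.  Returns 0 if the implementation is
     * within tolerance of the spec, nonzero otherwise.           */

    #include <stdio.h>

    extern void stack_push(int x);   /* implementation under test */
    extern int  stack_pop(void);

    int gauge_stack_lifo(void)
    {
        int i, expected[3];

        for (i = 0; i < 3; i++) {
            stack_push(i + 1);          /* push 1, 2, 3        */
            expected[2 - i] = i + 1;    /* expect 3, 2, 1 back */
        }
        for (i = 0; i < 3; i++) {
            int got = stack_pop();
            if (got != expected[i]) {
                printf("out of tolerance: expected %d, got %d\n",
                       expected[i], got);
                return 1;
            }
        }
        return 0;
    }

The point is that such a gauge tests behaviour rather than structure, so
any implementation of the Stack interface could be run against it.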

Does anyone have references for testing methodologies
or automated testing tools for large systems?

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (08/17/90)

> testing a big system is *hard*

It shouldn't be "hard", but it is a major cost factor.

Some general comments on testing large systems:

1.  There are various stages and types of testing, ranging from "unit testing"
to systems testing.  Testing and the resources required must be part of
the initial planning.  Designing the system for testability is very
important.

2.  During the requirements (or product) definition phase, each requirement
(or feature) should have an associated means of testing it defined
or suggested.  (Subject to later revision, of course.)

3.  Most phases of testing should be conducted by a separate test
organization.
This test organization should include some senior people; one of the most
common mistakes is assuming that testing (and Q/A) is something that only
requires a BS/CS and a couple of years of experience.

4.  An effective configuration management and reporting system is critical
to the success of a large project.  It must be automated, and should
provide *support* to all groups concerned.

5.  Automated testing tools are a necessity; manual inputs should be
eliminated wherever possible.  We have built numerous tools for testing
large systems over the years.  We have built testing languages which
generate test code (a sketch of the general table-driven style follows
these comments); however, I can't think of any case where a
"specification language" would have been much use (see above).  The
amount of work required to write a rigorous specification would
indeed be roughly equal to that required to write the deliverable code.
It would be more productive for such a specification language (SL) to
generate the end code instead of "test" code.

6.  Despite all the emphasis on testing, the single most effective
technique for eliminating and preventing defects is "inspection".
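
For what it's worth, here is a sketch (in C) of the general table-driven
style such generated test drivers often take; the add() routine and the
cases are invented purely for illustration:

    #include <stdio.h>

    extern int add(int a, int b);        /* routine under test */

    struct test_case { int a, b, want; };

    static struct test_case cases[] = {
        {   1,   1,   2 },
        {  -1,   1,   0 },
        { 100, 200, 300 },
    };

    int main(void)
    {
        int i, failures = 0;
        int n = sizeof(cases) / sizeof(cases[0]);

        for (i = 0; i < n; i++) {
            int got = add(cases[i].a, cases[i].b);
            if (got != cases[i].want) {
                printf("case %d: add(%d,%d) = %d, want %d\n",
                       i, cases[i].a, cases[i].b, got, cases[i].want);
                failures++;
            }
        }
        printf("%d of %d cases failed\n", failures, n);
        return failures != 0;
    }

Adding a new input then means adding a line to the table, not writing
more driver code by hand.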

There is an excellent book from Auerbach called "A Standard for
Application Testing" or something similar (I don't have it handy
at the moment); although oriented toward more classic DP COBOL
shops, it contains a thorough discussion of testing which is
applicable to any type of project.  Also, look for articles by
William Howden in IEEE Transactions on Software Engineering circa
1986-1988.  (I'll try to post a more complete bibliography in
a few days).

Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

rh@smds.UUCP (Richard Harter) (08/19/90)

Here are just a few notes on testing in addition to Jeffrey M.
Jacobs' excellent article on testing.

(1)	There are at least three rather general categories of testing:
specification testing, performance testing, and pathology testing.
Each category requires different kinds of testing techniques and scripts,
and each has different objectives.  Specification testing is the most
straightforward of these, at least in principle, since (given good
specs) one can simply check to see that the output of the software
matches the specs.  In practice the problem is that software specifications
are often determined ex post facto.  The major problem in performance
testing is simply to determine what it is that should be measured.

	The hardest type of testing is pathology testing -- i.e.
beat on the software in a variety of ways to reveal unexpected bugs.
This is an area where experience pays.  Essentially what you are trying
to find are errors that were not found in the development process.
Some of these are the usual logic errors -- fence post problems,
boundary problems, unchecked system error returns, etc.  However
the most perplexing and hardest to evoke are what I call mystery bugs.
Typically these come from storage overwrite problems.  They manifest
only under specific combinations of conditions and the resulting errors
appear later in the course of execution without apparent connection
to the original error.  Some languages, C and Fortran among them,
are particularly prone to these errors.

(2)	Good instrumentation in the software is very important.
My view is that instrumentation should be hard-wired in so that the
delivered software is the same as the tested software.  Examples:
Every module contains a counter that counts the number of times
it is called.  The software contains a call history trace scheme.
Storage allocation is encapsulated with overwrite checking on
boundaries.  (I would very much like to see a discussion of software
instrumentation techniques; a rough sketch of the sort of thing I
mean follows these notes.)  The software should be structured so
that the testing procedures have access to the internal instrumentation
data.

(3)	Testing procedures should be under configuration management.
They should also be tied to change control.  Ideally there should
be a test procedure associated with every functional change to the
software.
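
For what it's worth, here is a rough sketch (in C) of the kind of
hard-wired instrumentation meant in (2); the names and the guard-word
layout are invented for illustration, not taken from any particular
system:

    #include <stdlib.h>
    #include <string.h>

    #define GUARD 0xDEADBEEFUL

    /* One counter per module, bumped on every entry; it stays compiled
     * in, so the shipped software is the same as the tested software. */
    static unsigned long module_calls;
    #define COUNT_CALL()    (module_calls++)

    /* Allocation wrapper that plants guard words on both sides of the
     * caller's block so boundary overwrites can be detected later.    */
    void *guarded_alloc(size_t n)
    {
        unsigned long *p = malloc(n + 3 * sizeof(unsigned long));
        if (p == NULL)
            return NULL;
        p[0] = (unsigned long)n;        /* remember the size */
        p[1] = GUARD;                   /* front guard word  */
        memcpy((char *)(p + 2) + n, &p[1], sizeof(unsigned long));
        return p + 2;                   /* caller's block    */
    }

    /* Returns nonzero if something has scribbled past either end. */
    int guarded_check(void *block)
    {
        unsigned long *p = (unsigned long *)block - 2;
        unsigned long rear;

        memcpy(&rear, (char *)block + p[0], sizeof(unsigned long));
        return p[1] != GUARD || rear != GUARD;
    }

The test procedures mentioned in (3) can then dump the counters and run
guarded_check() over outstanding allocations at the end of each run.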

-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

bwb@sei.cmu.edu (Bruce Benson) (08/23/90)

In article <19578@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>
>> testing a big system is *hard*
>
>It shouldn't be "hard", but it is a major cost factor.
>
>Some general comments on testing large systems:
<agreeable items removed>

>3.  Most phases of testing should be conducted by a separate test
>organization.
>This test organization should include some senior people; one of the most
>common mistakes is assuming that testing (and Q/A) is something that only
>requires a BS/CS and a couple of years of experience.

The article makes a good argument for a methodical, planned test approach, but
then "cops out" with the age-old refrain of needing "experienced senior
people".  This is also sometimes worded as "put your best people in activity X".

What activity does not need experienced people?  Give me some compelling
reasons why one activity (such as test) should have more or less experienced
people than another activity (QA, design, coding, etc).

The entire software engineering process must work well.  Throwing
"experience" at part of the process is usually a symptom of not yet
having that part of the process under control.  We put our independent
testers out of the business of catching errors by introducing a methodical
test approach in DT&E (prior to that the programmers had the mind-set
that someone else was responsible for showing the code worked).

Quality of the product (code working well) should remain the responsibility of
those who produce the code.  Inspecting and testing are clearly part of the
activity of the programming staff.  Independent checking activities should be 
used in a manner to verify the quality of the product (and process) and not as
a substitute for the checking (and feedback) activities of the programming 
staff. 

>6.  Despite all the emphasis on testing, the single most effective
>technique for eliminating and preventing defects is "inspection".

Well, getting it right the first time works pretty well too.  Most code 
written is correct.  The trick is to increase this percentage.  Programmers
doing methodical reviews and testing of their code *learn* faster about
what their individual strengths and weaknesses are.  My reference above to
putting testers out of business refers to programmers being required to
"structurally" test their code (ie, exercise every line of code).  This
required the programmer to relook at just about every line of code.  This
simple combined inspection and test eliminated (yup - zero) code errors.
Independent test did the functional test, and we argued a lot over if the
function was implemented correctly, but the system never died, blew up, or
corrupted anything.  Your mileage may vary.
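
For anyone who has not tried it: in the absence of a coverage tool, one
crude way to check the "exercise every line" requirement is to salt the
code with hit counters and dump them after the test run.  A hypothetical
sketch in C (the probes and the classify() routine are made up for
illustration):

    #include <stdio.h>

    #define NPROBES 64
    static unsigned long hits[NPROBES];
    #define PROBE(n)  (hits[(n)]++)      /* one per branch or block */

    int classify(int x)                  /* example routine under test */
    {
        PROBE(0);
        if (x < 0)  { PROBE(1); return -1; }
        if (x == 0) { PROBE(2); return  0; }
        PROBE(3);
        return 1;
    }

    void dump_coverage(void)             /* call after the test run */
    {
        int i;

        for (i = 0; i < 4; i++)          /* probes 0-3 planted above */
            if (hits[i] == 0)
                printf("probe %d never executed\n", i);
    }

Any probe that reports zero points at code the tests never reached.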

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

bwb@sei.cmu.edu (Bruce Benson) (08/23/90)

In article <8316@fy.sei.cmu.edu> bwb@sei.cmu.edu (Bruce Benson) writes:

>required the programmer to relook at just about every line of code.  This
>simple combined inspection and test eliminated (yup - zero) code errors.

Before I get flamed for this remark, let me clarify:  no errors detected in
independent test nor reported by the users.  This is in contrast to our
more normal situation of test finding a dozen or more system-crashing
errors and, after "fixing" those, the users usually finding a handful of
their own.


* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

poer@titan.tsd.arlut.utexas.edu (Leslie Poer) (08/24/90)

Subject: Re: Specification Tools and Code Testing
Newsgroups: comp.software-eng
Organization: Applied Research Labs, University of Texas at Austin

	I need some info on a company called Intermetrics, Inc.
	Thanks in advance.

	-- Leslie

poer@titan.tsd.arlut.utexas.edu (Leslie Poer) (08/24/90)

Subject: Testing Conferences
Newsgroups: comp.software-eng
Organization: Applied Research Labs, University of Texas at Austin

	I need some info (like when, where, who's in charge etc) on
the next TAV Conference.  I think TAV stands for Testing, A(?), and
Verification.  It's supposed to be more theoretical than the Testing
Computer Software Conference.

	Thanks in advance,
	-- Leslie

jjacobs@well.sf.ca.us (Jeffrey Jacobs) (08/25/90)

In <8316@fy.sei.cmu.edu>, Bruce Benson writes:

> What activity does not need experienced people?  Give me some compelling
> reasons why one activity (such as test) should have more or less experienced
> people than another activity (QA, design, coding, etc).

All software engineering activities require experienced people.  However,
the general hiring requirements and patterns for testing are right at the
bottom; take a look at misc.jobs.offered or your local Sunday want ads and
see how many ads you can find for testing (or QA) requiring more than
two years of experience.

I agree that inspection and so-called "unit"/structural testing should be the
developer's responsibility.  However, this does not eliminate the
need for an independent testing organization.  a)  Programmers still
tend to be blind to their own mistakes.  b) Large systems involve
a level of complexity that cannot be cost-effectively tested by the
programmers, e.g. "integration testing", "systems testing", etc.  In
most cases the programmers don't even have a sufficiently
broad knowledge of the system to know where to start, nor can they
be expected to acquire such knowledge and still do any development work.

"Code working well" is only a part of building a large system.  I've seen
more than one project where all of the individually tested pieces "work well",
yet when put together, the whole thing falls apart.

The so-called "black team" approach still works quite well.

> Well, getting it right the first time works pretty well too.

No argument there!  But I happen to think that the "first time" should
mean the first time it gets *executed* (or close to the first time).  This
means inspecting before testing.  And, for the individual programmer,
it means *desk checking* the code before running their unit (or
structural) tests.  I also strongly advocate an independent review of the
code prior to execution.   In the ideal environment, these independent
reviews would be informal and an ingrained part of the "culture" of
the company.

It also means much more time spent in design and planning.

> Most code written is correct.  The trick is to increase this percentage.

Make up your mind! :-)  Frankly, I'm afraid my experience forces me to
disagree.  Most code written is not correct.  Even if the system doesn't
crash or corrupt data, if it isn't functionally correct, then the code
isn't correct.  But I've seen far too much code that did crash/corrupt
data/return bad results to come close to accepting the word "most".

> The entire software process must work well...

> Independent test did the functional test, and we argued a lot over if the
> function was implemented correctly...

This is indicative of a major flaw in the process!!!  If everybody
is arguing a lot over the functionality of the system then the requirements
and analysis were obviously done poorly!!!


Jeffrey M. Jacobs
ConsArt Systems Inc, Technology & Management Consulting
P.O. Box 3016, Manhattan Beach, CA 90266
voice: (213)376-3802, E-Mail: 76702.456@COMPUSERVE.COM

bwb@sei.cmu.edu (Bruce Benson) (08/26/90)

In article <20013@well.sf.ca.us> jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
>
>I agree that inspection and so-called "unit"/structural testing should be the
>developer's responsibility.  However, this does not eliminate the
>need for an independent testing organization.  a)  Programmers still
>tend to be blind to their own mistakes.  b) Large systems involve

"blind to their own mistakes", applies to everyone, even testers who should
use a methodical method since "intuitive testing" doesn't work well.  My 
real point is that if the programmer (or anyone) is blind to their own 
mistakes (and everyone is, try writing a letter) then the "system" should
help overcome this (i.e., methodical methods, tools, *peer* reviews, etc) in
a way that helps one to improve.  Sounds corny, but we've used it and it worked
for us.  Another way to phrase this is that the primary purpose of finding
errors is to help the individual (programmer, tester, configuration tech, etc)
not repeat the error.  We knew the strengths and weaknesses of each of our
programmers and the type of errors they made (and what they did well!).  This
change in perspective and reason for audits, reviews, and tests is why I
feel we eliminated code errors detected by test and the ultimate user. 
 
>a level of complexity that cannot be cost-effectively tested by the
>programmers, e.g. "integration testing", "systems testing", etc.  In
>most cases the programmers don't even have a sufficiently
>broad knowledge of the system to know where to start, nor can they
>be expected to acquire such knowledge and still do any development work.
>
>"Code working well" is only a part of building a large system.  I've seen
>more than one project where all of the individually tested pieces "work well",
>yet when put together, the whole thing falls apart.

I admit to having a background in the maintenance, as opposed to the
development, of large systems (well, 1/4 to 1/2 million lines of code -
you judge what size that is.  The military contracts for development, but
then maintains the code).  The only point I would make here is that getting
the written code to work well is a big step forward - this allows us to
concentrate on improving requirements, design and project management.  


<misc agreeable stuff deleted>
>No argument there!  But I happen to think that the "first time" should
>mean the first time it gets *executed* (or close to the first time).  This

Absolutely!  We had a simple philosophy (or challenge) which was for the
work (again, maintenance) to work the first time it is executed.  We backed
this up with a well defined process to approach each maintenance activity.
I found that programmers got addicted to this approach and the results.

>means inspecting before testing.  And, for the individual programmer,
>it means *desk checking* the code before running their unit (or
>structural) tests.  I also strongly advocate an independent review of the
>code prior to execution.   In the ideal environment, these independent
>reviews would be informal and an ingrained part of the "culture" of
>the company.

I have to disagree.  This whole process must be formal.  "Coding" is not
something that should be a discretionary activity.  It should be done
methodically with planned approaches (follow the architecture of the program,
code general solutions - not *fixes*, comments follow a pattern, etc),
planned reviews, and testing.  Since our approach emphasized improving
the programmer and was not some arbitrary *text book* process (ever
see military regulations?), it was adopted with less resistance than
we expected (programmers helped define the process).

>> Most code written is correct.  The trick is to increase this percentage.
>
>Make up your mind! :-)  Frankly, I'm afraid my experience forces me to
>disagree.  Most code written is not correct.  Even if the system doesn't
>crash or corrupt data, if it isn't functionally correct, then the code
>isn't correct.  But I've seen far too much code that did crash/corrupt
>data/return bad results to come close to accepting the word "most".

We looked at the code and analyzed the errors caught.  In our work the
vast (how do you like that for fuzzy) majority of the code was correct.
This I believe explains why we so quickly achieved and sustained "zero
defects" (code errors detected by test and users).

>> Independent test did the functional test, and we argued a lot over if the
>> function was implemented correctly...
>
>This is indicative of a major flaw in the process!!!  If everybody
>is arguing a lot over the functionality of the system then the requirements
>and analysis were obviously done poorly!!!

Yup, the requirements part of the process stunk.  For that matter, independent 
test was a "window".  If we turned buggy code over to test, then buggy code 
eventually went to the users.  We realized that our test function could only 
give us an idea of how well we did.  Can't fix everything at once!

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

mjl@praguecs.rit.edu (Lutz Mike J) (08/27/90)

In article <8353@fy.sei.cmu.edu>, bwb@sei.cmu.edu (Bruce Benson) writes:
|>"blind to their own mistakes", applies to everyone, even testers who should
|>use a methodical method since "intuitive testing" doesn't work well.  My 
        ^^^^^^^^^^^^^^^^^

Has the word ``method'' become so meaningless that we have to prepend the
adjectival form simply to provide semantic weight?  Or is useless redundancy
going to become the rule?  A bit off the thread, perhaps, but then again maybe
not.  After all, specification is about clarity if it's about anything
at all.

---------
Mike Lutz
Rochester Institute of Technology
Rochester, NY 14623-0887
mjl@cs.rit.edu

mcgregor@hemlock.Atherton.COM (Scott McGregor) (08/28/90)

In article <20013@well.sf.ca.us>, jjacobs@well.sf.ca.us (Jeffrey Jacobs) writes:
> In <8316@fy.sei.cmu.edu>, Bruce Benson writes:
> 
> > What activity does not need experienced people?  Give me some compelling
> > reasons why one activity (such as test) should have more or less
> > experienced people than another activity (QA, design, coding, etc).
> 
> All software engineering activities require experienced people.  However,
> the general hiring requirements and patterns for testing are right at the
> bottom; take a look at misc.jobs.offered or your local Sunday want ads and
> see how many ads you can find for testing (or QA) requiring more than
> two years of experience.
> 
The point when you need experienced people is when you are trying to 
change the way you do things, or start out on new work. Less experienced
people sometimes hesitate to test the limits of their knowledge unless
they know that there is a recognized expert around who can help them
get out if they find themselves hopelessly wedged in a horrible situation.
Typically these situations don't happen and the less experienced do just
fine without consulting the experienced people at all.  They find many
new ways of doing things that make great contributions.  But I have 
noted on several occasions a reluctance of otherwise good people to
try working in an unknown area unless they know of a nearby expert to
contact.  Once an expert was hired, EVERYONE's productivity went up,
even if no one liked talking with the expert.  

I therefore presume that the implication of the original poster's comment was
that experienced people would be needed to support the substantial changes
to the testing job.

In general, companies have their own specific value systems, and I have
found many companies that seem to have a class system which values R&D
engineers most highly, test engineers next, and support engineers lower
still.  This is often seen in years of experience and in salaries.  I believe
this is because, without the original product, there would be nothing
to test or support, so R&D seems more important to the core business
(you could always ship it with poor quality, or poor support, but you can't
ship it at all if it hasn't been invented yet).  These trends can be seen,
as Jacobs suggested, by looking at want ads.  However, many organizations
that are hiring are looking to support existing activities, not start
totally new ones.  In such cases experienced people are less critical, since
the existing staff can train the newcomer.  But when big change is
planned, highly paid, experienced experts may be sought out, even for
test engineering and support engineering tasks.  However, local experts
may be well known through professional societies, etc., and may be contacted
directly to see if they are interested, rather than through a want ad.
So a lack of experience requirements in want ads can be deceiving. 

Scott McGregor

mcgregor@hemlock.Atherton.COM (Scott McGregor) (08/29/90)

>    I disagree.  In many cases experienced people have become entrenched
>    in doing things the way they have been successful.  This tends to make
>    them resistant to change, regardless of its potential merits.
>    Inexperienced (but not necessarily ignorant) people are often
>    more reactive to change because they don't recognize it as such.

I don't think that this is a disagreement.  First, when I said that 
I have observed that inexperienced people sometimes hesitate, there is
nothing to disagree with.  I have observed this.  Maybe others have not
observed this, but I have, on many occasions.  Inexperienced people don't
always hesitate, and not all of them do.  But I have observed that many often 
hesitate when there are no experienced gurus to go to. 

Second, I didn't say that the experienced people spearhead the change, only
that they catalyze it.  I have observed, as you have, that the experienced
ones are often more entrenched in what they already know.  Some of
the less experienced people want to try something different to "make their
mark".  But they often don't try the truly outlandish breakthrough type
approaches unless they think that a safety net (in the guise of an
expert) is nearby.  What I find interesting is how few times the safety
nets are tried, given the hesitation before they were installed.

One danger though is that the outlandish ideas of the inexperienced people
must be carefully tended, and defended, because sometimes inexperienced
people will under-recognize their own contributions and over-recognize
those of the experienced people.

My reason for bringing this up is merely to point out important human
dynamics issues that affect productivity and that are often forgotten when
people are described as interchangeable resources (as usually is the
case in want ads).

>    AAAAAAAAAAAAAAAHHHHHHHHHHHHHHH!!!!!!!!!!!!  Run for your lives!!!!
>    I guess you could ship it with poor quality or poor support but 
>    you know what...  Your net return will be worse than if you hadn't
>    shipped it at all.

Many people won't agree.  At a billion-dollar company, you often have
enough predictability in sales and enough cash to float out a few R&D
efforts a few months more without killing the company.  Single-product
companies running out of venture capital may see themselves as just a
few steps ahead of the grim reaper.  They often figure if they can get
the revenue today, they MIGHT be able to fix things tomorrow.  But if
they wait until tomorrow they will CERTAINLY be dead first!  I cannot
comment on the correctness or incorrectness of this view.  But correct
or incorrect, many people hold it sometimes, and it causes the sorts of
value systems described above that have real impact on people's lives.
Excellent companies usually get past this problem sometime, but
many companies aren't there yet, and people's livelihoods hang in the
balance while they learn by Darwinian selection. 

Scott McGregor  
Atherton Technology

bwb@sei.cmu.edu (Bruce Benson) (08/30/90)

In article <141454@sun.Eng.Sun.COM> donm@margot.Eng.Sun.COM (Don Miller) writes:
>In article <29390@athertn.Atherton.COM> mcgregor@hemlock.Atherton.COM (Scott McGregor) writes:
>>The point when you need experienced people is when you are trying to 
>>change the way you do things, or start out on new work. Less experienced
>>people sometimes hesitate to test the limits of their knowledge unless
>>they know that there is a recognized expert around who can help them
>>get out if they find themselves hopelessly wedged in a horrible situation.
>
>   I disagree.  In many cases experienced people have become entrenched
>   in doing things the way they have been successful.  This tends to make
>   them resistant to change, regardless of its potential merits.
>   Inexperienced (but not necessarily ignorant) people are often
>   more reactive to change because they don't recognize it as such.
>   Both sides of this argument rely on generalizations, though.  

I just finished being part of a team that did a software "self assessment"
on an organization.  It was interesting to note the differences between
problem perception in the "old timers" (about 10 years on the job) and
newer folks (1-3 years).  When asked to talk about an actual problem
in the organization the "old timers" would always further explain the
problem in great detail (much more detail than we had collected), and
then say they were aware of it.  The feeling I got was: "yeah, I know
a lot about that problem, so what?"  When the newcomers were asked,
they launched into the problem (in less glorious detail) and talked about
possible solutions.

What stood out was the "old-timers" (usually middle managers) seemed so
comfortable with the problems (like an old buddy).  It was also fairly
obvious they considered the newcomers' perceptions of these problems as
"personal problems" of the newcomers (whose bosses were these middle
managers).

Not enough data points to draw any real conclusions, but the consistency
during the interviews was remarkable.


* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

rcd@ico.isc.com (Dick Dunn) (08/30/90)

bwb@sei.cmu.edu (Bruce Benson) writes:

>...I just finished being part of a team that did a software "self assessment"
> on an organization.  It was interesting to note the differences between
> problem perception in the "old timers" (about 10 years on the job) and
> newer folks (1-3 years)...

10 years isn't an "old timer"!  That is, not unless you mean 10 years with
the particular group or company, in which case it's quite a while--and easy
to see how they'd become hide-bound, not by their experience _per_se_ but
by being in one place too long.  Clarification?

> What stood out was the "old-timers" (usually middle managers) seemed so
> comfortable with the problems (like an old buddy)...

Hold on, here...it's really not reasonable to compare the attitudes of
middle managers--who are now twice-removed from doing software--with the
folks who are actually doing it.  The problems easily remain familiar when
you've been through them a lot, yet they're non-threatening when you get a
little distance from the immediacy of them.  Did you have any 10-year vets
in non-management positions?
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...I'm not cynical - just experienced.

donm@margot.Eng.Sun.COM (Don Miller) (08/31/90)

In article <29541@athertn.Atherton.COM> mcgregor@hemlock.Atherton.COM (Scott McGregor) writes:
  I said:
>>    I disagree.  In many cases experienced people have become entrenched
>>    in doing things the way they have been successful.  This tends to make
>>    them resistant to change, regardless of its potential merits.
>>    Inexperienced (but not necessarily ignorant) people are often
>>    more reactive to change because they don't recognize it as such.
>
>Second, I didn't say that the experienced people spearhead the change, only
>that they catalyze it.  I have observed, as you have, that the experienced
>ones are often more entrenched in what they already know.  

   Good, I think the thesis that we can agree on is something like this:
   "Inexperienced people often initiate organizational change.  However,
    experienced people are largely responsible for operationalizing the
    change."
   How's that?
   
>>> Scott writes that you can ship a product without quality or support
>>> but can't ship it at all if it hasn't been invented
>
>>    AAAAAAAAAAAAAAAHHHHHHHHHHHHHHH!!!!!!!!!!!!  Run for your lives!!!!
>>    I guess you could ship it with poor quality or poor support but 
>>    you know what...  Your net return will be worse than if you hadn't
>>    shipped it at all.
>
>Many people won't agree.  At a billion-dollar company, you often have
>enough predictability in sales and enough cash to float out a few R&D
>efforts a few months more without killing the company.  Single-product
>companies running out of venture capital may see themselves as just a
>few steps ahead of the grim reaper.  They often figure if they can get
>the revenue today, they MIGHT be able to fix things tomorrow.  But if
>they wait until tomorrow they will CERTAINLY be dead first!  I cannot
>comment on the correctness or incorrectness of this view.  But correct
>or incorrect, many people hold it sometimes, and it causes the sorts of
>value systems described above that have real impact on people's lives.
>Excellent companies usually get past this problem sometime, but
>many companies aren't there yet, and people's livelihoods hang in the
>balance while they learn by Darwinian selection. 

   I'll have to agree with this as well.  When in survival mode
   a company must sometimes make decisions which fly in the face
   of long-term success.  Sacrificing quality and support is
   such a decision.  Hopefully, over the course of time, Darwinian
   selection will result in all companies, no matter how small,
   realizing the value of quality and service.
>
>Scott McGregor  
>Atherton Technology

Don Miller
Software Quality Engineering
Sun Microsystems
donm@margot.sun.com

mcgregor@hemlock.Atherton.COM (Scott McGregor) (09/05/90)

In article <141583@sun.Eng.Sun.COM>, donm@margot.Eng.Sun.COM (Don Miller) writes:

> >Second, I didn't say that the experienced people spearhead the change, only
> >that they catalyze it.  I have observed, as you have, that the experienced
> >ones are often more entrenched in what they already know.  
> 
>    Good, I think the thesis that we can agree on is something like this:
>    "Inexperienced people often initiate organizational change.  However,
>     experienced people are largely responsible for operationalizing the
>     change."
>    How's that?
>    

My observation is more along the lines of "if you want inexperienced
people to initiate organizational change, make sure you have enough
experienced talent on hand that the inexperienced people won't be
afraid of trying something where they might need help later."  A mix of
experience is beneficial because experienced people without inexperienced
people can become entrenched, and inexperienced people without experienced
people can feel frustrated, hesitant, and fearful of getting in trouble. 
Managing such a combination calls for making sure that creative new ideas from
the inexperienced are given a fair hearing and are not censored
by the opinions of the more experienced nor self-censored by hesitant, less
experienced contributors. 

I think that is somewhat different from the statement about making 
changes operational, which has to do with knowing how to get things done in
an organization.  My comment is about trying new things in the presence of
a (human) safety net--more people are willing to try if the
net is nearby than if not.  As I noted, these nets often go untested,
but work often takes longer until they are available.

The comment above about how experienced people
know how to make things become operational may be true too,
though they can also have come to decide that such changes can't
be made and don't bother to try.  There do seem to be individuals
who play the role of "change agent" (often unofficially) and it is
important to identify them and work with them to get changes accepted.

Scott McGregor
Atherton Technology