[comp.software-eng] separate SW testing newsgroup

michaelo@jove.cs.pdx.edu (Mickael J. O'Hair) (12/14/90)

In a word (or two): don't.  Testing is an inherent part of software engineering
and should be discussed in the mainstream newsgroup.  Starting a separate group
would, in my opinion, serve to widen the gap between design and testing.  Over
eight years in the industry has given me concrete evidence that the "design is
creative and manufacture/testing/QA is drudge work" mentality still exists. If
we want to truly think of ourselves as engineers, then we had best consider
testing as an integral part of the software engineering process.  Any and all
discussions of QA and testing should be held in this newsgroup.

Sorry about the evangelical tone, but I've been at three companies where the
design/QA gulf was in place, and none of them is in business today.

			    -mjo-

rh@smds.UUCP (Richard Harter) (12/15/90)

In article <916@pdxgate.UUCP>, michaelo@jove.cs.pdx.edu (Mickael J. O'Hair) writes:
> 	...Over
> eight years in the industry has given me concrete evidence that the "design is
> creative and manufacture/testing/QA is drudge work" mentality still exists.

Well my humble view is that testing is drudge work.  That's why you have
machines do it.  People are supposed to design and implement the procedures
that let the machines do it.  Or as that noted authority on software
development (me) said:

	"Any procedure whose purpose is to improve software quality
	 which is not automated is a bug waiting to happen."
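
For instance, a table-driven harness is the sort of procedure I have in
mind: design it once, and the machine reruns it on every build.  A sketch
(isqrt() and its cases are invented for illustration, not from any real
project):

    #include <stdio.h>

    /* Function under test -- a hypothetical stand-in for project code. */
    static int isqrt(int n)
    {
        int r = 0;
        while ((r + 1) * (r + 1) <= n)
            r++;
        return r;
    }

    struct testcase { int input; int expected; };

    static struct testcase cases[] = {
        { 0, 0 }, { 1, 1 }, { 4, 2 }, { 15, 3 }, { 16, 4 }, { 100, 10 }
    };

    int main(void)
    {
        int i, failures = 0;
        int ncases = sizeof(cases) / sizeof(cases[0]);

        for (i = 0; i < ncases; i++) {
            int got = isqrt(cases[i].input);
            if (got != cases[i].expected) {
                printf("FAIL: isqrt(%d) = %d, expected %d\n",
                       cases[i].input, got, cases[i].expected);
                failures++;
            }
        }
        printf("%d of %d cases passed\n", ncases - failures, ncases);
        return failures != 0;   /* nonzero exit flags a broken build */
    }

Wire the exit status into the build procedure and the check happens
whether or not anyone remembers to run it.
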
-- 
Richard Harter, Software Maintenance and Development Systems, Inc.
Net address: jjmhome!smds!rh Phone: 508-369-7398 
US Mail: SMDS Inc., PO Box 555, Concord MA 01742
This sentence no verb.  This sentence short.  This signature done.

jeremy@epochsys.UUCP (Jeremy L. Mordkoff) (12/17/90)

I'm fairly new to the network, so just ignore me if I'm way off base.

It seems to me that the only reason to spawn a new newsgroup would be
if there were a number (majority?) of people who would read one and not
the other. From what I read, everyone who would read comp.software-test
would also read comp.software-eng, and I suspect that the opposite is
also true. So why don't we (the SQA guys) just wait for the eng guys to
start complaining about our using up 'their' bandwidth...

Secondly, as a career SQA engineer, I wonder about the motives for
splitting off SQA from eng (read development). The literature is clear
in its assertion that a successful SW shop must integrate these two
functions. I have spent a lot of effort trying to get development to
let SQA in on its process, and to split the newsgroups apart would seem
to be a step in favor of the old school.

And as a third thought, if we do split this group, we will also have to
change its name. You will never convince me that software-eng does not
include SQA. So we will have to change it to software-dev and add
software-qa. While we're at it, how about software-support,
software-manufacturing, and software-maintenance...

Jeremy
-- 
Jeremy L. Mordkoff		uunet!epochsys!jeremy
Epoch Systems, Inc.		{harvard!cfisun,linus!alliant}!palladium!jeremy
8 Technology Drive 		(508)836-4711 x346 (voice)
Westboro, Ma. 01581		(508)836-4884 (fax)	

bwb@sei.cmu.edu (Bruce Benson) (12/18/90)

In article <278@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:

>Well my humble view is that testing is drudge work.  That's why you have
>machines do it.  People are supposed to design and implement the procedures
>that let the machines do it.  Or as that noted authority on software
>development (me) said:
>
>	"Any procedure whose purpose is to improve software quality
>	 which is not automated is a bug waiting to happen."

Current thinking on quality software is to build the quality in up front -
in the thinking stage - not in the "automatable" testing stage.  With that
focus, my corollary to Richard's law is:

         "Any procedure that can be completely automated does not
          have any significant influence on software quality."

The key assertion is that by the time we understand a procedure well enough
to completely automate it, the most significant benefits derived from that
procedure have already been realized in the manual form, and automating it
will yield only inconsequential improvements in quality (over the manual
form).  

Automation is still desirable to preclude special causes of variation in 
quality (Richard's "bug waiting to happen"), but the real boost in quality
(reducing common causes of variation) comes from the initial idea behind the
procedure.  

This corollary does not deny that automation may
dramatically increase productivity even when it doesn't increase quality.
Unfortunately, we've automated parts of some procedures (such as testing),
and the partially automated procedure then becomes the whole procedure ("but
it PASSED the test suite!").  

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force

shimeall@taurus.cs.nps.navy.mil (timothy shimeall) (12/18/90)

In article <278@smds.UUCP> rh@smds.UUCP (Richard Harter) writes:
>Well my humble view is that testing is drudge work.  That's why you have
>machines do it.  People are supposed to design and implement the procedures
>that let the machines do it.

A worthy goal, but there's one small fly in that soup.  The problem of
matching a program to its specification is, in the general case, provably 
equivalent to the halting problem.  In short, you can't produce a
totally automated software testing system to test software in
general.  It may be possible (but VERY difficult) to create software
that tests specific applications automatically -- but that's going to
be FAR harder than creating any piece of software to fulfill that
application (consider that the testing system must be able to handle
all of the functions of the application system, plus be able to
recognize all of the possible failure modes in the application system
[the latter being a VERY large set for real-world software]).  Given
that you don't want to do "drudge work" in the first place, are you
willing to give over several years of your life to automating that 
very "drudge work" (and testing your automation)?  And willing to repeat
that effort every time you change applications?
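
Even a toy example shows where the effort goes: the automated oracle must
re-encode the specification itself.  A sketch (isqrt() and its spec are
invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* Function under test (hypothetical). */
    static int isqrt(int n)
    {
        int r = 0;
        while ((r + 1) * (r + 1) <= n)
            r++;
        return r;
    }

    /* The oracle restates the spec: r is the integer square root of n
     * exactly when r*r <= n < (r+1)*(r+1). */
    static int meets_spec(int n, int r)
    {
        return r >= 0 && r * r <= n && (r + 1) * (r + 1) > n;
    }

    int main(void)
    {
        int i;

        srand(1);   /* fixed seed, so runs are repeatable */
        for (i = 0; i < 10000; i++) {
            int n = rand() % 30000;
            int r = isqrt(n);
            if (!meets_spec(n, r)) {
                printf("FAIL: isqrt(%d) = %d violates the spec\n", n, r);
                return 1;
            }
        }
        printf("10000 random cases passed\n");
        return 0;
    }

Scale meets_spec() up to a full application, with all of its failure
modes, and you have written the application a second time - and now you
must test THAT.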

Indeed, there are a lot of philosophical reasons why you may not WANT
testing automated.  Consider: testing's purpose is to assure the
quality of the software.  Don't you want that assurance based on the
judgement of a responsible individual, rather than based on individual
judgement encoded in a piece of software?  Consider that the software
author in the latter case may be totally uninterested in YOUR project,
but only in selling his/her testing tool...
				Tim

-- 
Tim Shimeall ((408) 646-2509)

daves@hpopd.pwd.hp.com (Dave Straker) (12/19/90)

Well my humble view is that testing is drudge work.  That's why you have
----------
Design is creative. Designing tests is doubly creative. Coding is
drudge work. Executing tests is double drudge. Usually it isn't as
clear-cut as this: the coder has a certain amount of leeway, especially
if he is the designer too. Similarly, executing tests can be very
interesting if you designed the tests, which means you understand
the design of the code. Breaking other people's code is fun!

There is still a problem: separate the coders from the testers, and
you get an us'n'them game of tennis, which (as the basenote writer
points out) can contribute to the death of companies.

A solution: Break your friend's code. Test within the development
team, where the engineer is capable of all roles, and executes them
too. A change is as good as a rest. An example:

    Tim                       Jim
    -----------               ------------
    Designs X                 Designs Y
    Designs tests for Y       Designs tests for X
    Codes X                   Codes Y
    Tests Y                   Tests X

This is a simplistic example (I haven't described inspections, etc.),
but I'm sure you get the drift.

Dave Straker            Pinewood Information Systems Division (PWD not PISD)
[8-{)                   HPDESK: David Straker/HP1600/01
                        Unix:   daves@hpopd.opd.hp.com

kambic@iccgcc.decnet.ab.com (12/21/90)

In regard to the question of a separate newsgroup for testing, this 
would be a counterproductive move.  It is actually impossible to 
separate testing activity from development activity.  Consider putting 
software development within the framework of the scientific method.
Any hypothesis (spec, design, code, subsystem, system) is subject to 
verification before it can be accepted as usable.  Until then it 
is only theoretically valid.  Whether this verification comes from 
inspections, paper execution, design analyzers, testing, or actual 
use in the application (all subject to their own problems) is not 
the issue.  Any one of these items is a proposed solution until 
verified by experiment.  What matters is that it works in the real 
world, not merely that it is proposed to do something.  

So I don't think you can separate the two, nor do I think that you 
want to.  Developers and testers are two facets of the same stone, 
with different but complementary viewpoints. 

GXK
standard disclaimer

marick@cs.uiuc.edu (Brian Marick) (12/21/90)

bwb@sei.cmu.edu (Bruce Benson) writes:


>Current thinking on quality software is to build the quality in up front -
>in the thinking stage - not in the "automatable" testing stage.  With that
>focus, my corollary to Richard's law is:

>         "Any procedure that can be completely automated does not
>          have any significant influence on software quality."

This is demonstrably false.  Consider compilers.  Consider assemblers.
I'm all for following Deming ("There is too much talk of the need for
new machinery and automation - most people have not learned to use
what they have"), but let's not go overboard.  There are
quality-improving procedures that are impractical when performed
manually.  This is especially true for quality measurement and
evaluation procedures, which are essential: without them, you won't
know what to think about next time you're in the "thinking stage".
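
For instance, even crude execution counts are a measurement you could
never sustain by hand across thousands of runs.  A sketch (the probes and
the program under measurement are invented for illustration; real coverage
tools insert the probes automatically):

    #include <stdio.h>

    #define NBRANCH 4
    static long hits[NBRANCH];       /* execution count per probe */
    #define PROBE(n) (hits[(n)]++)   /* one probe per branch arm */

    /* Program under measurement. */
    static int classify(int x)
    {
        if (x < 0)  { PROBE(0); return -1; }
        if (x == 0) { PROBE(1); return  0; }
        if (x < 10) { PROBE(2); return  1; }
        PROBE(3);
        return 2;
    }

    int main(void)
    {
        int inputs[] = { -5, 0, 3, 42 };
        int i;

        for (i = 0; i < 4; i++)
            classify(inputs[i]);
        for (i = 0; i < NBRANCH; i++)
            printf("branch %d executed %ld times\n", i, hits[i]);
        return 0;
    }

A branch that no test ever executes is exactly the kind of fact you need
for the next "thinking stage" - and exactly the kind no one collects
manually.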

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick

bwb@sei.cmu.edu (Bruce Benson) (12/22/90)

In article <marick.661788678@m.cs.uiuc.edu> marick@cs.uiuc.edu (Brian Marick) writes:
>bwb@sei.cmu.edu (Bruce Benson) writes:
>
>
>>Current thinking on quality software is to build the quality in up front -
>>in the thinking stage - not in the "automatable" testing stage.  With that
>>focus, my corollary to Richard's law is:
>
>>         "Any procedure that can be completely automated does not
>>          have any significant influence on software quality."
>
>This is demonstrably false.  Consider compilers.  Consider assemblers.
>I'm all for following Deming ("There is too much talk of the need for
>new machinery and automation - most people have not learned to use
>what they have"), but let's not go overboard.  There are
>quality-improving procedures that are impractical when performed
>manually.  This is especially true for quality measurement and
>evaluation procedures, which are essential: without them, you won't
>know what to think about next time you're in the "thinking stage".

Good point. But I'll argue that this is where productivity vs quality gets 
obscured because of the vast increase in *productivity* brought about by these
tools. High productivity allows one to concentrate on quality issues, but
the tools themselves don't increase quality.  Instead, they just ensure that 
the expected level of quality is maintained even at a very high level of 
productivity.  Let me try to illustrate my argument:  

At one time parsing general algebraic expressions was a difficult and
error-prone programming effort.  Now it is almost trivial.  Because we
can use a tool (say Lex/Yacc) to produce a perfect algebraic parser every
time, does this tool improve quality?  I don't think so.  Instead, it
eliminates mistakes that would prevent me from attaining the *expected*
quality.  
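
For contrast, here is the flavor of code the tool replaces: a hand-written
recursive-descent evaluator.  A sketch (invented for illustration, error
handling elided):

    #include <stdio.h>
    #include <ctype.h>
    #include <stdlib.h>

    static char *p;                  /* cursor into the input string */

    static long expr(void);          /* forward declaration */

    static void skipspace(void)
    {
        while (isspace((unsigned char)*p))
            p++;
    }

    /* factor := number | '(' expr ')' */
    static long factor(void)
    {
        long v;

        skipspace();
        if (*p == '(') {
            p++;
            v = expr();
            skipspace();
            if (*p == ')')
                p++;
            return v;
        }
        return strtol(p, &p, 10);
    }

    /* term := factor { ('*' | '/') factor } */
    static long term(void)
    {
        long v = factor();

        for (;;) {
            skipspace();
            if (*p == '*')      { p++; v *= factor(); }
            else if (*p == '/') { p++; v /= factor(); }
            else return v;
        }
    }

    /* expr := term { ('+' | '-') term } */
    static long expr(void)
    {
        long v = term();

        for (;;) {
            skipspace();
            if (*p == '+')      { p++; v += term(); }
            else if (*p == '-') { p++; v -= term(); }
            else return v;
        }
    }

    int main(void)
    {
        p = "2 + 3 * (4 - 1)";
        printf("2 + 3 * (4 - 1) = %ld\n", expr());   /* prints 11 */
        return 0;
    }

Every line of that had to be designed, coded, and debugged by hand; the
generated parser gives me the expected quality without the mistakes.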

Compilers and assemblers do the same thing for programmers. My
"while loop" always works correctly (at the mechanical level ;-)) because
the compiler implements it correctly.  The expected level of quality is
attained (while loop works), and I can do lots of them without worrying
if they will work right.  

I've found this distinction useful when evaluating tools and what they can
really provide to a software effort.  The most useful tool is usually the
one that allows me to "lock in" my quality (the things we know how to do)
and do it faster.  The more speculative tool is the one that is going to
allow me to do something I have not done before.  Since we have not done
it, it is also hard to tell if the tool is really any good.

* Bruce Benson                   + Internet  - bwb@sei.cmu.edu +       +
* Software Engineering Institute + Compuserv - 76226,3407      +    >--|>
* Carnegie Mellon University     + Voice     - 412 268 8469    +       +
* Pittsburgh PA 15213-3890       +                             +  US Air Force