[comp.software-eng] Software Quality Assurance

pjs269@olympus.uucp (Paul Schmidt) (06/13/90)

Many texts say that SQA should have a reporting structure to senior
management that is separate from design's.  I THINK THIS IS WRONG!
Quality depends on the customer's perception and is embedded in the
design and production of the software, so the SQA function should be as
close to the product as possible.  Therefore it should report to the
same management as design!

Any comments?

lfd@cbnewsm.att.com (leland.f.derbenwick) (06/14/90)

In article <1990Jun12.191025.29951@olympus.uucp>, pjs269@olympus.uucp (Paul Schmidt) writes:
> Many texts say that SQA should have a reporting structure to senior
> management that is separate from design's.  I THINK THIS IS WRONG!
> Quality depends on the customer's perception and is embedded in the
> design and production of the software, so the SQA function should be as
> close to the product as possible.  Therefore it should report to the
> same management as design!
> 
> Any comments?  [Well, you asked! -- lfd]

In an ideal world, SQA belongs in the design organization.  But
in reality, that allows the design management to weaken SQA: "You
certify this program _now_, or we'll miss schedule and it'll
reflect on your merit review."  Well, not usually that blatant,
but it can get pretty close...

The most effective situation I've experienced (context: about 10
each of hardware and software developers, 2 to 3 year project)
was with a totally separate QA organization, but one QA person
assigned full-time to the project, who attended all reviews, etc.
His office was also in the midst of ours.

So the QA _function_ was tightly coupled to the project, but the
reporting structure was not.  We got immediate feedback from him,
but if he saw something he didn't like, he could speak out without
fear of reprisals.

 -- Speaking strictly for myself,
 --   Lee Derbenwick, AT&T Bell Laboratories, Warren, NJ
 --   lfd@cbnewsm.ATT.COM  or  <wherever>!att!cbnewsm!lfd

ejp@icd.ab.com (Ed Prochak) (06/14/90)

In article <1990Jun12.191025.29951@olympus.uucp>, pjs269@olympus.uucp
(Paul Schmidt) writes:
> Many texts say that SQA should have a reporting structure to senior
> management that is separate from design's.  I THINK THIS IS WRONG!
> Quality depends on the customer's perception and is embedded in the
> design and production of the software, so the SQA function should be as
> close to the product as possible.  Therefore it should report to the
> same management as design!
> 
> Any comments?

Basically, the argument goes like this:

 with the reporting structure as

                        Mgr
                         |
                    ------------
                    |           |
                   Dev          QA
                  team         team

and if the development is being delayed because the QA team says
the product doesn't meet the requirements, then the manager
(under pressure from his boss to get the new product out ON-TIME)
MAY pressure the QA team (threats of poor reviews, reassignment to
crummy jobs, whatever).


So the idea is to use this structure:

                 Dev                   QA
                 Mgr                  Mgr
                  |                    |
                  |                    |
                 Dev                   QA
                team                  team

Now if the QA team says the product doesn't meet requirements, they
are shielded from the development manager's wrath. This does push
resolution of the problem up another level (the boss of the two
managers must decide), but the benefits are:
*QA team protected from the pressure of development engineering's product
 delivery schedule (though they may have their own schedule to meet).
*Development team must recognize that development is not complete just
 because the delivery date has come. Bad product will come back from QA.
*Development Manager protected from schedule slips because the slips
 are approved by the manager's boss.
*QA Manager protected from allowing shoddy product shipments because
 the shipments are approved by the manager's boss.

Granted, these are the ideal results.

I agree with your point that quality is embedded in the product from
the concept stage on up. The QA team should be involved in the
definition of the product, along with development engineering, marketing,
manufacturing engineering, and maybe some others. Having QA report to
the development manager is not necessarily the way to do this,
any more than marketing and manufacturing should report to the
development manager.

The bottom line is that everyone must be involved in getting a quality
product out the door. (read "everyone" as "I")

(Pardon the inconvenience during our remodelling of the signature file)
Edward J. Prochak        (216)646-4663         I think. 
{cwjcc,pyramid,decvax,uunet}!ejp@icd.ab.com    I think I am.
Allen-Bradley Industrial Computer Div.       Therefore, I AM!
Highland Heights,OH 44143                      I think?  --- Moody Blues

marick@m.cs.uiuc.edu (06/15/90)

I've narrowed the question down to "Who does testing?".  Because
everyone seems to have a different idea of what SQA does, this may not
speak to the original question.  If so, sorry.
==========
The two extremes are that developers do all their own testing and that
all testing is contracted out to a completely independent
organization.

Although people do make hard-and-fast rules, like "testing should be
separate from development", designing a testing strategy and
organization is just like designing anything else -- it involves
making tradeoffs between conflicting goals.  You have to figure out
your goals, figure out the tradeoff, and make a choice that will work
for you. And then, when you find out it didn't work, you figure out
why and try to improve it next time.

For example, here's an idealized picture of a tradeoff between
developer testing and independent testing.  The developer curve is
marked with an = and independent testing is marked with a %.  Developer
testing finds more bugs earlier, while the independent tester is still
getting up to speed.  In the long run, though, the independent tester
wins out, because the developer will miss bugs caused by his own
misunderstandings.


 |                                             %
 |
 |                                     %
 |
 |                                %
B|                                 =   =        =
U|                 =       =
G|            =               %
S|         =
 |       =                %
F|     =
O|    =               %
U|   =            %
N|  =         %
D| =% % % %
 --------------D1---------------------------D2-----
                   TIME


So, which do you pick?  Well, if you have an absolute deadline at D1,
you should pick developer testing.  At point D2, independent testing
will yield the best product.  However, there's an important question
that the graph doesn't capture -- what *kind* of bugs are found by the
different people?  Does the developer discover a few deep important
bugs, while the independent tester discovers a lot of low priority
bugs?  It often seems that way.

Much of this is because independent testers have the software "thrown
over the fence", and they don't have the time or opportunity to learn
it well enough to test it well.  And, sad to say, a lot of the people
in testing aren't very good.  They're either employees not good enough
for development or freshouts learning the ropes in their first
assignment (and ready to jump ship to development the first chance
they get).  And you want good tests in this situation? -- especially
given that effective testing is highly heuristic, meaning it's based
on experience and skill as a programmer and designer.

In such a situation, you're often better off having the developer test his
or her own code.  It's a myth that developers can't test.  They might
not be able to do as well as the hypothetical equally competent
independent tester, but most of them can do quite well.  They don't
want to, so they do a half-hearted job.  And they don't know how, so
their half-hearted job is a bad one.  And they don't have decent
tools, or a system designed to be testable, so they rebel against the
scutwork required.  (Freshouts can't rebel, you see, so they get stuck
with the work.)

All things being equal, developer testing would be best, because it's
cheapest.  But, in practice, developer testing often means bad
testing.  What's a reasonable compromise?  A moderately independent,
semi-rotating testing organization that "loans" people to projects
seems to work well.  What does *that* mean?

1.  Moderately independent means that a project manager can't swipe
testers to fill development holes without a fight with an equal.
Testers who work for you are too tempting (and too easily overruled).

2.  Developers rotate in and out of testing.  They learn to test.
Because developers know testing, and testers know development, the
quality of both development and testing rises.

3.  But there has to a be a semi-permanent core to the testing group.
They're the experts.  They do the training.  They build or buy tools.
They improve the process.

4.  "Loaning" means that the testers act as full members of the
projects they're testing, except that they also report to the testing
manager.  This helps avoid the us-vs-them tribal battles that are
endemic with separate testing organizations.

5.  Loaning also means that testers enter the project early.  They
participate in design at all levels.  Their job is to act as the
Devil's Advocate -- that person who persistently and annoyingly asks,
"What could go wrong here?  Why won't this work?".  (In a mature
development+testing organization, I would expect this to be the
tester's most important role.)  If no tester is available, someone
should still explicitly have this role, as it will flush out nasty
errors.

6.  Since they're loaned early, testers can write the test cases
early.  There's no good reason not to have test cases for an interface
when the interface is finished (see the sketch after this list).  You
*will* discover design errors that can be corrected early -- why wait
until a bunch of code has to be thrown away?

7.  Since the testers will be infected with developer
misunderstandings, an independent testing team that makes a reasonably
quick pretend-I'm-a-user test run over the system is a useful backup.
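
To make point 6 concrete, here is a minimal sketch in C of a test
driver written against an agreed interface before any implementation
exists.  (The stack interface and every name in it are hypothetical,
purely for illustration; they don't come from any project mentioned
above.)  The test file compiles as soon as the interface is frozen and
links as soon as somebody supplies an implementation.

/* stack.h -- the agreed interface; no implementation exists yet. */

#include <stdio.h>

typedef struct stack *Stack;        /* opaque handle */

Stack stack_create(int capacity);   /* NULL on failure */
int   stack_push(Stack s, int x);   /* 0 on success, -1 when full */
int   stack_pop(Stack s, int *x);   /* 0 on success, -1 when empty */
void  stack_destroy(Stack s);

/* test_stack.c -- written the day the interface is frozen.  It links
 * against whatever implementation eventually shows up. */

static int failures = 0;

static void check(int ok, const char *what)
{
    if (!ok) {
        failures++;
        printf("FAIL: %s\n", what);
    }
}

int main(void)
{
    Stack s = stack_create(2);
    int x;

    check(s != NULL, "create a small stack");
    check(stack_pop(s, &x) == -1, "pop from an empty stack is rejected");
    check(stack_push(s, 10) == 0, "push within capacity");
    check(stack_push(s, 20) == 0, "push up to capacity");
    check(stack_push(s, 30) == -1, "push past capacity is rejected");
    check(stack_pop(s, &x) == 0 && x == 20, "pop returns last value pushed");
    stack_destroy(s);

    if (failures)
        printf("%d test(s) failed\n", failures);
    else
        printf("all tests passed\n");
    return failures != 0;
}

Writing even this much forces the argument about what stack_push should
do when the stack is full to happen *before* the code is written --
which is exactly the kind of design error point 6 says you want to
catch early.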

There are certainly situations where this organization is not
appropriate.  As always, the way you do things is very dependent on
what you're trying to do.  What are your quality goals?  What fits
into your culture?  What are your organization-development goals?
What are your time/resource constraints?

One thing: when starting up such a testing organization, it helps to
have a strong-willed manager who's widely respected, and also
technically strong team members (who "smell right" to developers) who
nevertheless have good social skills and common sense.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick