[comp.windows.ms] software testing

jyen@Apple.COM (John Yen) (04/04/91)

Rick Allard writes:
> >To achieve this, the engineer has to...
> >(3)
> >test the program with real users, being as open as humanly possible to
> >complaints and suggestions, and (4) revise and test in a seemingly
> >endless cycle until the interface and functionality are "right."
> In my day-to-day use, especially of Microsoft stuff -- and only
> partially accounting for scale -- I cannot believe houses do nearly
> enough of this.  Yes, software is complicated, but it doesn't
> vary statistically like an auto nor is it difficult to test.

'... nor is it difficult to test'?!  Have you ever tested non-trivial commercial
software?  Think about all the equivalence classes of input and output,
matrixed with possible hardware configurations, matrixed with interactions
with other software, matrixed with different uses and users of the software.
That's just off the top of my head.
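
Just to put rough numbers on it -- mine are purely illustrative, not from
any real test plan -- even a handful of classes per dimension multiplies
out fast once you cross them:

    /* toy illustration: every count below is an assumption */
    #include <stdio.h>

    int main(void)
    {
        long input_classes   = 20;  /* distinct input equivalence classes */
        long output_classes  = 10;  /* distinct output classes to verify  */
        long hw_configs      = 15;  /* CPU/memory/video combinations      */
        long coexisting_apps = 8;   /* other software loaded alongside    */
        long user_profiles   = 5;   /* kinds of users / usage patterns    */

        printf("%ld cases\n", input_classes * output_classes * hw_configs
                              * coexisting_apps * user_profiles);
        /* prints 120000 -- and those are *classes*, not single inputs */
        return 0;
    }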

User testing is hugely important, no question.  IMHO it is not necessarily
the hardest part of testing a given software product, but it is nonetheless
difficult to do well.  There should be a full range of users across several
hardware/software configurations in varying environments, with engineers
watching, listening, but saying NOT 1 DAMN THING -- this is an experiment;
interaction destroys validity.

This is neither a defense of user-hostile software nor an attack on Microsoft;
this is a statement of incredulity that at least some users seem to believe
good software is crunched out like so many cookies.  This will be posted to
comp.software-eng just to remind the people there of how little credit is given
to good design and quality engineering.

After so many years of godawful software, I can't _believe_ some people think
like this...

John Yen  jyen@apple.com  7.0 kernel test team
Disclaimer: I speak only for myself, #include <disclaimer.h>, etc blah...

bcw@rti.rti.org (Bruce Wright) (04/05/91)

In article <12911@goofy.Apple.COM>, jyen@Apple.COM (John Yen) writes:
> Rick Allard writes:
> > Yes, software is complicated, but it doesn't
> > vary statistically like an auto nor is it difficult to test.
> 
> '... nor is it difficult to test'?!  Have you ever tested non-trivial commercial
> software?  Think about all the equivalence classes of input and output,
> matrixed with possible hardware configurations, matrixed with interactions
> with other software, matrixed with different uses and users of the software.
> That's just off the top of my head.

I'd second the comment that software testing is a difficult subject.
I suspect that the original author meant that each individual test
is not too difficult (possibly true with a lot of software).  But, as
you say, you have to run lots of tests;  it's simply impossible to run
every possible input under every possible hardware configuration for
software of any useful degree of complexity.  The problem is trying
to choose which tests would be the most useful and performing them so
as to get the maximum amount of information from them - and this
selection process is hard.
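
To make "choosing the most useful tests" concrete -- a toy sketch of mine,
not anything from the original posters -- one common trick is to give up
on covering every combination of configuration factors and instead pick a
small set of runs that still exercises every *pair* of factor values.
With 4 factors at 3 values each, exhaustive testing means 81 runs; the 9
rows below cover every pair, and the program just verifies that claim:

    /* pairwise-coverage check for a hand-picked 9-run plan (assumed data) */
    #include <stdio.h>

    #define FACTORS 4
    #define LEVELS  3
    #define TESTS   9

    static const int test_plan[TESTS][FACTORS] = {
        {0,0,0,0}, {0,1,1,2}, {0,2,2,1},
        {1,0,1,1}, {1,1,2,0}, {1,2,0,2},
        {2,0,2,2}, {2,1,0,1}, {2,2,1,0},
    };

    int main(void)
    {
        int missing = 0;
        int f1, f2, v1, v2, t;

        for (f1 = 0; f1 < FACTORS; f1++)
            for (f2 = f1 + 1; f2 < FACTORS; f2++)
                for (v1 = 0; v1 < LEVELS; v1++)
                    for (v2 = 0; v2 < LEVELS; v2++) {
                        int covered = 0;
                        for (t = 0; t < TESTS; t++)
                            if (test_plan[t][f1] == v1 &&
                                test_plan[t][f2] == v2)
                                covered = 1;
                        if (!covered)
                            missing++;
                    }

        printf("%d uncovered pairs out of %d, using %d tests vs %d exhaustive\n",
               missing, FACTORS * (FACTORS - 1) / 2 * LEVELS * LEVELS,
               TESTS, LEVELS * LEVELS * LEVELS * LEVELS);
        return 0;
    }

Real products have far more factors, and the factors interact, which is
exactly why the selection problem is hard.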

There are, of course, a few celebrated bugs that make one wonder "how
could that ever have made it out the door?"  And sometimes it is obvious
that an option was never really tested.  But there can be surprising
hardware differences that foul up something that appeared to work
just fine on another piece of hardware - maybe trash in a memory
location, maybe a CPU model that handles exceptions differently, maybe
a video card that behaves in a nonstandard fashion, etc.  If you've 
never developed software used by a lot of sites, you may not realize 
how UNhomogeneous the computing environments are.
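
For what it's worth, here is a sketch of the "trash in a memory location"
failure mode (a contrived example of my own, not tied to any product): the
counter is never initialized, so the answer depends on whatever the stack
happened to hold -- it can look perfectly fine on one machine or compiler
and print garbage on another:

    #include <stdio.h>

    /* counts newline characters in a string */
    static int count_lines(const char *text)
    {
        int lines;                 /* BUG: should be  int lines = 0;  */

        while (*text != '\0') {
            if (*text == '\n')
                lines++;
            text++;
        }
        return lines;              /* correct only if that stack slot
                                      happened to contain zero */
    }

    int main(void)
    {
        printf("%d lines\n", count_lines("one\ntwo\nthree\n"));
        return 0;
    }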

> There should be a full range of users across several
> hardware/software configurations in varying environments, with engineers
> watching, listening, but saying NOT 1 DAMN THING -- this is an experiment;
> interaction destroys validity.

Possibly a step like this is required for some types of software - I'm
not at all sure that it's a general requirement.  I _do_ think you need
to have a dialogue between the engineers and the users - in my experience
many users can't verbalize exactly what they want but "know it when they
see it".  Just watching user interaction won't necessarily get you there,
and may not lead to the sorts of brainstorming that you can get when the
user and the engineer can interact more closely.  The sort of experiment
you describe can be useful for quality control or to see if there are 
things that are "left out", at least if the software is highly interactive.

Customer testing ("beta testing") can also uncover unforeseen problems,
and it is rarely practical to have an engineer watch every individual
doing beta testing ...

> this is a statement of incredulity that at least some users seem to believe
> good software is crunched out like so many cookies.  

A not uncommon belief, I'm afraid ...

							Bruce C. Wright

pj@pnet51.orb.mn.org (Paul Jacoby) (04/07/91)

John Yen (jyen@apple.com  7.0 kernel test team) writes:
 > '... nor is it difficult to test'?! Have you ever tested non-trivial
 > commercial software?

After working Tech Support for a 'major mainframe/Unix company' for 2 1/2
years, I can tell anyone who wants to ask that software testing is (1) a pain,
(2) extremely difficult to implement properly without LOTS of people and
resources, and (3) even more difficult to do when management couldn't give a
rat's ass about quality.

A proprietary database product of said 'company' never ceased to amaze us, as
it would crash, destroy data, and generally just not work properly in some
spectacularly simple configurations.

Of course, said management team had some odd ideas about quality -- (1) the
release date is the driving goal; (2) quality lies with Continuation Engineering (the
after-it-is-released bug fixers), NOT with the Development team; and (3) you
really don't need many people to properly support a product.

Said company has gone down the toilet in a big way over the past several
months....

INET: pj@pnet51.orb.mn.org   -OR-    pejacoby@3m.com