[comp.text.desktop] How to run a Beta Test

news@sun.UUCP (07/09/87)

[This is a little sideways to the general topic, but it grew out of some
mail I've been exchanging with one of the Mac software manufacturers on
the net, and there seems to be a fair amount of interest in the topic
since my posting on setting up a Beta Testing service for USENET.

	-- chuq]

>It's always difficult finding *good*, reliable beta testers.  We have
>had problems in this area.  Our product is a good example.  Looking
>back at it now, it seems to me that a large number of the "beta-testers"
>signed up to test the product were really only interested in getting a copy
>of the program for their own use, way before anyone else would get it.

>Several individuals *did* do a good job of reporting bugs, giving
>detailed descriptions and follow-ups.  But this was a *significant* minority!
>Most we didn't even hear from!

>Another problem we have run into is the blatant disregard for a product's
>copyright.  I know that a *great many* copies of our product out there
>today are bootleg "beta" copies!

>The result of all this?  I'm justifiably wary of people purporting to
>be "beta-testers".  This is not to say that it can't work - just that it
>must be well organized and well supported.

I am (in case you didn't know) among other things a professional Beta
Tester -- before a new release of Unix goes to Sun customers, I get to
try to break it first.

There are really two types of Beta testers: the technical folks, whose
job is to give feedback and help solidify the product, and the special
customers, who get early releases to gripe about.

Do you just hand out software and tell people to give you feedback?
You're asking for problems.  Here is generally what I would suggest to
anyone doing a Beta Test.  I could probably be convinced to help
someone design a Beta Test on a consulting basis if they really want me
to, but this should help folks avoid most of the pitfalls.

o Decide how many beta testers will be enough.  And stick to it. You'd
    be surprised how many extra copies go out sometimes. Then again,
    maybe not.
    
o If your marketing folks insist on giving copies to special customers,
    let the marketing folks support them, and stick only to the technical
    sites yourself.  The others generally aren't worth the time spent
    poring over their reports.
    
o If you need more than 10 beta sites to get coverage of the product,
    you're choosing the wrong beta sites.

o Build up a prospective list of your customers and whoever else you
    know that might qualify.  Pass that list around the company and get
    feedback, especially from the developers, and support groups.  If
    anyone on the list is flagged as a problem customer, get them off
    the list (this means if ONE person flags them -- blackballs are
    appropriate). You want customers who are working with and for you
    -- and only those people.

o Interview them.  Get them to explain why they're qualified, how much
    time they plan on dedicating, and what they hope to accomplish.

o The folks left over are the candidates.  They get to sign
    non-disclosures.  They can't talk about the beta software.  They
    can't admit to having it.  They can't write reviews about the
    pre-release software.

	[you do NOT want to have happen to you what happened to
	Microsoft with Word 3.0 -- all those authors who said "neat
	toy!" and are now looking silly are overreacting and saying
	"horrible garbage!" -- primarily because the authors blew it
	and are trying to save face.  Word was neither as good as they
	say nor as bad as they are trying to make it -- what really got
	screwed up was a lack of journalistic professionalism
	throughout the magazines.  Unfortunately, Microsoft is getting
	the brunt of it because it seems the reporters aren't willing
	to admit they blew it and are blaming Microsoft for many of the
	self-generated problems

	Did you, by the way, see what MacUser did to PageMaker 2.0?
	Their pre-release software came with a "no review" clause, so
	they didn't review it -- they simply talked about the new
	features and how they planned on reviewing it when they got the
	real software.  That's borderline legal, depending on the
	wording of the contract -- but it is still slimy.

	This is a great example of why this needs to be in writing, so
	that people don't get cute on you -- and of why the agreement
	should bar them from talking about the product as well as from
	reviewing it.  It may sound hardnosed, but there are folks
	who've proven that being hardnosed is necessary.

	It is also the major factor in why I'm not renewing my
	subscription -- if they've gotten this bad in their reporting,
	I don't need them.]

    They can't give it to friends.  They are there to test software. If
    they aren't willing to put that in writing, they shouldn't be Beta
    sites.  And make sure it is in writing, and that they understand
    that if you find out they've broken the contract, you are going to
    have them ship back the software.  Have your lawyer write up a Beta
    Test contract that spells out the rights and responsibilities
    clearly. I can't stress this step enough -- words and a handshake
    DO NOT WORK.


o Part of the beta agreement is an agreement to do a weekly written
    report.  Every beta tester will send in a weekly report of problems
    and comments, or a piece of paper that says "nothing significant
    found" or full of recipes or something.  They should also have a
    contact in the company to call as soon as a problem is found, but
    the written report is backup that something didn't get dropped on
    the floor.  It is also a good check that the beta tester is beta
    testing. If they miss two reports, pull the software or get a good
    reason.

    This sounds hardnosed, but the practical fact is that most beta
    sites are useless: the wrong testers were chosen for the wrong
    reasons, the information returned was ignored, lost, or generally
    discounted, no information was returned at all, or the schedules
    were unrealistic.
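
To give an idea of scale, the weekly report doesn't have to be fancy.
Something along these lines is plenty (this particular layout is just
my own sketch, not any kind of standard form):

    Beta site / contact:  who is reporting
    Week ending:          date
    Time spent:           rough hours on the beta this week
    New problems:         one numbered entry per problem, with steps
                          to reproduce and how bad it is
    Old problems:         status of anything reported earlier
    Comments:             manual errors, wish-list items, or "nothing
                          significant found" if that's the truth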

A few well-known things to avoid in beta testing:

o Do not beta test until all of the software is ready.  Trying to do
    development AND bug fixing generally doesn't work.  There is a good
    purpose for an alpha (pre-beta) test, but it needs to be handled
    differently (and accepted by all concerned as different).

o Do not beta test until the manuals are written.  Don't try to shorten
    schedules by testing the software by itself -- the manuals tend to
    need at least as much beta testing as the software.

o Do not scrimp on testing time.  Anyone who spends a year developing a
    piece of software should realize that it can't be tested in four
    weeks. There is a definite tradeoff between exhaustive testing and
    getting to market, but if you try to make up a late schedule (which
    implies hurried development) by cutting testing, you might as well
    slit your throat.  Beta testers need time to install the software,
    read the manuals, get familiar with the software, and really start
    beating on it.  Figure 8 weeks, minimum.  Hopefully 12.

    Let me re-emphasize this.  If the development schedule slips a
    month, and you try to make up the time by cutting the length of the
    Beta Test, you will live to regret it. I would even suggest that
    the later the product, the longer the Beta Test, but that would
    probably give everyone in Marketing a heart attack.  In practical
    terms, you have a choice -- you can either test it before shipping
    it, or let your customers test it for you -- and expect to have to
    do an emergency second release shortly after. That's expensive -- if
    you don't believe me, ask Microsoft what Word 3.1 is going to cost them.

o Developers cannot test.  The people who write the software cannot
    successfully test a product.  While this may sound illogical, they
    are too familiar with the product, and know how it is supposed to
    work.  It is generally the places where the developer didn't think
    about using it that things get a bit creaky -- and if the developer
    had thought about it, he would have programmed for it.

o Finally, get a complete novice.  If the product is designed for the
    general marketplace, turn a complete novice loose on it.  Call
    Kelly and hire a temp, put them in front of a machine with a manual,
    and watch.  If they get confused, your customers will, too.  Is the
    program confusing?  Is the manual incomplete?  Do you need a (better)
    tutorial?  Give the person a week of completely unstructured testing
    and see what happens -- that is the best preview I can think of
    for what will happen once you ship.

chuq

----------------------------------------
Submissions to:   desktop%plaid@sun.com -OR- sun!plaid!desktop
Administrivia to: desktop-request%plaid@sun.com -OR- sun!plaid!desktop-request
Paths:  {ihnp4,decwrl,hplabs,seismo,ucbvax}!sun
Chuq Von Rospach	chuq@sun.COM		Delphi: CHUQ

Touch Not the Cat Bot a Glove -- MacIntosh Clan Motto

davidsen@steinmetz. (07/27/87)

In article <23056@sun.uucp> news@sun.uucp (news) writes:
|o Build up a prospective list of your customers and whoever else you
|    know that might qualify.  Pass that list around the company and get
|    feedback, especially from the developers, and support groups.  If
|    anyone on the list is flagged as a problem customer, get them off
|    the list (this means if ONE person flags them -- blackballs are
|    appropriate). You want customers who are working with and for you
|    -- and only those people.

Having done beta testing and early release programs for INteractive,
IBM, Solution Systems and Microsoft, I disagree. Technical support may
hate someone because "that SOB is *always* calling with problems." If
they are legitimate problems in the product or documentation, you
probably want this guy.  You should be selecting on the basis of ability
to find, replicate, and report problems, not on how nice a person s/he is.

The good beta testers are either totally incompetent, beginners, or
people with devious minds.  I once had a vendor tell me that I could
"find failure modes in a bowling ball."  I *hope* that was a compliment.

|o Part of the beta agreement is an agreement to do a weekly written
|    report.  Every beta tester will send in a weekly report of problems
|    and comments, or a piece of paper that says "nothing significant
|    found" or full of recipes or something.  They should also have a
|    contact in the company to call as soon as a problem is found, but
|    the written report is backup that something didn't get dropped on
|    the floor.  It is also a good check that the beta tester is beta
|    testing. If they miss two reports, pull the software or get a good
|    reason.

I hope you intended to include email "written reports".  I have used
them, and I think they are much better than paper, because the tester
can easily include a code fragment with a report.
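
For example, a mailed report might look something like this (the
product, addresses, and code fragment here are invented purely for
illustration; they aren't from any real beta program):

    To: beta-bugs@vendor.uucp
    Subject: beta 0.9 -- Find dumps core on an empty pattern

    Sun 3/60, SunOS 3.4.  Open any document, pick Find, leave the
    field empty, hit Return: core dump, every time.  The fragment
    below reproduces it through the library interface as well.

        #include "wproc.h"      /* hypothetical beta library header */

        int main(void)
        {
            wp_doc *d = wp_open("test.doc");
            wp_find(d, "");     /* empty pattern -> core dump */
            return 0;
        }

    No workaround found so far; this is the only new problem this week.
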
===
I agree with the rest of what you said (or at least don't disagree).
Thanks for sharing this with us; I suspect many people have never had
any guidance in this area.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me
----------------------------------------
Submissions to:   desktop%plaid@sun.com -OR- sun!plaid!desktop
Administrivia to: desktop-request%plaid@sun.com -OR- sun!plaid!desktop-request
Paths:  {ihnp4,decwrl,hplabs,seismo,ucbvax}!sun
Chuq Von Rospach	chuq@sun.COM		Delphi: CHUQ

We live and learn, but not the wiser grow -- John Pomfret (1667-1703)

sow%luthcad.UUCP@uunet.UU.NET (08/03/87)

>o Build up a prospective list of your customers and whoever else you
>    know that might qualify.  Pass that list around the company and get
>    feedback, especially from the developers, and support groups.  If
>    anyone on the list is flagged as a problem customer, get them off
>    the list (this means if ONE person flags them -- blackballs are
>    appropriate). You want customers who are working with and for you
>    -- and only those people.

I disagree; you need people who have problems with your products.  If they
have problems, you have to find the reason.  Perhaps they just run other
applications on the system than you do, which means they run into new
bugs.  We are a beta test site, and I am one of the nasty guys who always
runs into problems.  But they offered to make us a beta test site in
spite of me.
	
You also need some new beta testers each time, since the old ones run the
same tests as last time.  I presume that the developers learn something
from the history.  By the way, have you tested "sed -e" on a Sun??? :-)

>o Part of the beta agreement is an agreement to do a weekly written
>    report.  Every beta tester will send in a weekly report of problems
>    and comments, or a piece of paper that says "nothing significant
>    found" or full of recipes or something.  They should also have a
>    contact in the company to call as soon as a problem is found, but
>    the written report is backup that something didn't get dropped on
>    the floor.  It is also a good check that the beta tester is beta
>    testing. If they miss two reports, pull the software or get a good
>    reason.

You talk about the beta testers' duties.  But you should also send reports
back to the beta testers about the current status, listing fixed,
remaining, and new bugs (all bugs, not only this beta tester's own).
We sent weekly reports and got weekly reports and software updates back.

>o Do not scrimp on testing time.  Anyone who spends a year developing a
>    piece of software should realize that it can't be tested in four weeks.

Is it possible to fix the testing time in advance?  What do you do if you
are still getting a lot of new bugs each week when the scheduled time ends?

Sven-Ove Westberg, CAD, University of Lulea, S-951 87 Lulea, Sweden.
Tel:     +46-920-91677  (work)                 +46-920-48390  (home)
UUCP:    sow@luthcad.UUCP  or  seismo!mcvax!enea!luthcad!sow
ARPA:    enea!luthcad!sow@seismo.css.gov

	[moderator's kibbitz: You misunderstand my statement -- I agree
	with you completely that you need beta testers who have
	problems with your program -- but you do NOT need beta testers
	who ARE problems, which is what I was talking about.

	And yes, feedback TO the beta testers is important, too, and
	companies are bad at this as well.  But it is also more
	difficult unless someone is in charge of collating and
	distributing the material back out.
	
	If you're still getting lots of new bugs when the scheduled testing
	time ends, you don't have a product, you have a problem. You have
	two choices: slip schedule and fix the thing, or ship it anyway and
	expect to get roasted by your customers. A pissed-off customer can
	easily become an ex-customer.  Is it worth the risk? -- chuq]

----------------------------------------
Submissions to:   desktop%plaid@sun.com -OR- sun!plaid!desktop
Administrivia to: desktop-request%plaid@sun.com -OR- sun!plaid!desktop-request
Paths:  {ihnp4,decwrl,hplabs,seismo,ucbvax}!sun
Chuq Von Rospach	chuq@sun.COM		Delphi: CHUQ

We live and learn, but not the wiser grow -- John Pomfret (1667-1703)