[comp.admin.policy] User Satisfaction ?

uudot@ariel.lerc.nasa.gov (Dorothy Carney) (05/21/91)

Our upper management, which is not very computer literate, wants to
receive quarterly reports (oral: 15 minutes!) which are "METRICs" of
customer satisfaction.  Our customers are hundreds of researchers and
engineers, as well as secretaries and office staff.

Do any of you have metrics for user satisfaction?  

Somehow, telling upper managers about the mean time between failures
or the integrity of disk data doesn't do it.  Neither does surveying
a random sampling of users with insipid questions like "On a scale
of 1 to 10 ...".

We do have a Help Desk which tracks telephone requests for help ...
but focusing the metrics on problem reports would be negative, and we
want to be positive.

de5@ornl.gov (Dave Sill) (05/21/91)

In article <1991May21.152727.27423@eagle.lerc.nasa.gov>, uudot@ariel.lerc.nasa.gov (Dorothy Carney) writes:
>
>Our upper management, which is not very computer literate, wants to
>receive quarterly reports (oral: 15 minutes!) which are "METRICs" of
>customer satisfaction.

Ask upper management what they want to see, rather than trying to
second-guess them.

>Do any of you have metrics for user satisfaction?  

How about response-time statistics for problem calls?

>Somehow, telling upper managers about the mean time between failures
>or the integrity of disk data doesn't do it.  Neither does surveying
>a random sampling of users with insipid questions like "On a scale
>of 1 to 10 ...".

How about system availability?  E.g., a table showing the
uptime/downtime for each system.
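
(A rough sketch of how such a table could be rolled up, assuming the
operators log one line per outage with the system name and the minutes
it was down -- the log name, format, and 90-day quarter below are all
assumptions, not anything your site necessarily has:)

    # sketch: turn an outage log into a per-system availability table
    from collections import defaultdict

    QUARTER_MINUTES = 90 * 24 * 60          # assumed reporting period
    downtime = defaultdict(int)

    with open("outages.log") as log:        # assumed: "system minutes" per line
        for line in log:
            if not line.strip():
                continue
            system, minutes = line.split()
            downtime[system] += int(minutes)

    print("%-12s %10s %8s" % ("system", "down(min)", "uptime"))
    for system, minutes in sorted(downtime.items()):
        pct = 100.0 * (QUARTER_MINUTES - minutes) / QUARTER_MINUTES
        print("%-12s %10d %7.2f%%" % (system, minutes, pct))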

A comprehensive direct user survey can be done occasionally, but most
surveys get terrible responses (like 10% are returned), so do it
infrequently and make a big deal out of it.  Perhaps combine it with
some spot interviews. 

>We do have a Help Desk which tracks telephone requests for help ...
>but focusing the metrics on problem reports would be negative, and we
>want to be positive.

I disagree.  There's no lonely Maytag repairman in this business.  If
you don't have users calling with questions, you don't have users
using your systems.  A statistic showing that 95% of user queries are
resolved within 24 hours would be impressive.  You could also
categorize the queries and implement a plan to address recurring
problems.  Then you could show how, e.g., queries about remote
printing have dropped X% since you published a handout on the topic.
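
(As a rough sketch of that arithmetic -- assuming the Help Desk log can
be dumped as a tab-separated file with an open time, a close time, and
a one-word category per call; the file name, field layout, and date
format here are assumptions about whatever your tracking system can
export:)

    # sketch: percent of queries resolved within 24 hours, plus a count
    # of queries per category, from an assumed tab-separated dump
    from datetime import datetime
    from collections import Counter

    FMT = "%Y-%m-%d %H:%M"
    total = within_24h = 0
    by_category = Counter()

    with open("helpdesk.log") as log:
        for line in log:
            if not line.strip():
                continue
            opened, closed, category = line.rstrip("\n").split("\t")
            hours = (datetime.strptime(closed, FMT) -
                     datetime.strptime(opened, FMT)).total_seconds() / 3600
            total += 1
            if hours <= 24:
                within_24h += 1
            by_category[category] += 1

    if total:
        print("resolved within 24 hours: %.0f%%" % (100.0 * within_24h / total))
    for category, count in by_category.most_common():
        print("%-20s %d" % (category, count))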

-- 
Dave Sill (de5@ornl.gov)	  It will be a great day when our schools have
Martin Marietta Energy Systems    all the money they need and the Air Force
Workstation Support               has to hold a bake sale to buy a new bomber.

ables@hal.com (King Ables) (05/22/91)

>>Do any of you have metrics for user satisfaction?  
> 
> How about response-time statistics for problem calls?
> How about system availability?  E.g., a table showing the
> uptime/downtime for each system.

Both are good metrics of how well you're servicing your users,
but neither will tell you beans about user satisfaction.  Users
can be very dissatisfied in an environment that's up 99% of the
time if the 1% was the hour before a big presentation.  And a
*big* part of user satisfaction comes from the attitude they
perceive you have about helping them get their problem solved.
If they feel like you're doing it only because you have to, and
not because you really care about it, the satisfaction level will
be lower, no matter how well you do it.  Your heart has to at
least *look* like it's in it.  And that's hard to measure, at least
objectively.

> A comprehensive direct user survey can be done occasionally, but most

Yes, if you can get people to take it seriously, this is one of the
best tools... like he said, do it infrequently and make a big deal about it.

> problems.  Then you could show how, e.g., queries about remote
> printing have dropped X% since you published a handout on the topic.

This is a good way to use the problem call stats!

If the presentation is really meant to estimate user satisfaction,
it's going to be real hard to back up.  You can list all the things
that have been suggested and conclude that the users "should be satisfied,"
but that's as far as you can go.  If management wants to know if people
are satisfied, maybe they should ask them themselves.  ;-)

If you don't have some kind of e-mail address or newsgroup for local
suggestions, you might want to add that.  I think people are usually
willing to suggest improvements when it's easy enough.  How much traffic
showed up there would be an indicator of how comfortable people are with
the way things currently work (assuming you could be sure people would
really use it if they felt the need).

Just the rantings of a former user-support-type, for what it's worth.
-king

ghm@ccadfa.adfa.oz.au (Geoff Miller) (05/22/91)

de5@ornl.gov (Dave Sill) writes:

>In article <1991May21.152727.27423@eagle.lerc.nasa.gov>, uudot@ariel.lerc.nasa.gov (Dorothy Carney) writes:
>>
>>Our upper management, which is not very computer literate, wants to
>>receive quarterly reports (oral: 15 minutes!) which are "METRICs" of
>>customer satisfaction....[stuff deleted]...
>>We do have a Help Desk which tracks telephone requests for help ...
>>but focusing the metrics on problem reports would be negative, and we
>>want to be positive.

>I disagree.  There's no lonely Maytag repairman in this business.  If
>you don't have users calling with questions, you don't have users
>using your systems.  A statistic showing that 95% of user queries are
>resolved within 24 hours would be impressive.  You could also
>categorize the queries and implement a plan to address recurring
>problems.  Then you could show how, e.g., queries about remote
>printing have dropped X% since you published a handout on the topic.

The point which Dave makes implicitly is that you *don't* call them 
"problem reports".  You talk about "client enquiries" or some similarly
pompous phrase (depending on the bullshitability of upper management)
which puts the emphasis on the service you provide rather than the 
problems which the users have found.  However, to do this you do need
some method of monitoring the enquiries that come in, and probably the
best way is to set up a formal "help desk" through which all enquiries
are funnelled and logged.

Geoff Miller  (ghm@cc.adfa.oz.au)
Computer Centre, Australian Defence Force Academy

steveg@welch.jhu.edu (Steve Grubb) (05/22/91)

In article <1991May21.152727.27423@eagle.lerc.nasa.gov> uudot@ariel.lerc.nasa.gov writes:
>
>Do any of you have metrics for user satisfaction?  
>
>....focusing the metrics on problem reports would be negative, and we
>want to be positive.


How about:  Amount of down time
	    Statistics on response time and load
	    Summarize what applications people are
		using the system for and how these
		contribute to getting things done
	    Statistics on email use
	    I think it would be a mistake to not
		include a summary of help desk activity,
		including turn-around time for "critical"
		or "level-one" problem resolution.

Generally, folks in upper management aren't dummies, and will
be suspicious of a "hurray-for-us" type of report, so don't 
worry about mixing negative with the positive.

  Steve
-- 
 _____________   ____________________________   ______________________
( Steve Grubb ) ( Johns Hopkins Sch. of Med. ) ( steveg@welch.jhu.edu )
 -------------   ----------------------------   ----------------------

gerwitz@hpcore.Kodak.Com (Paul Gerwitz) (05/22/91)

In article <1991May21.152727.27423@eagle.lerc.nasa.gov>, uudot@ariel.lerc.nasa.gov (Dorothy Carney) writes:
|> 
|> 
|> Our upper management, which is not very computer literate, wants to
|> receive quarterly reports (oral: 15 minutes!) which are "METRICs" of
|> customer satisfaction.  Our customers are hundreds of researchers and
|> engineers, as well as secretaries and office staff.
|> 
|> Do any of you have metrics for user satisfaction?  
|> 
|> Somehow, telling upper managers about the mean time between failures
|> or the integrity of disk data doesn't do it.  Neither does surveying
|> a random sampling of users with insipid questions like "On a scale
|> of 1 to 10 ...".
|> 
|> We do have a Help Desk which tracks telephone requests for help ...
|> but focusing the metrics on problem reports would be negative, and we
|> want to be positive.

How about the number of calls to your help desk,

Percentage of problems resolved within some defined time frame (1 day)

Upon completion of a call, contact the customer with a short (1-3 question)
survey getting their "overall satisfaction rating", etc.
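
(If those callback answers go into a file, tallying them for the
quarterly report is only a few lines -- a sketch; the file name and the
"8 or better" cutoff are placeholders, not anything standard:)

    # sketch: tally the post-call "overall satisfaction" ratings;
    # assumes one callback per line holding a 1-10 answer
    ratings = [int(line) for line in open("callbacks.log") if line.strip()]

    if ratings:
        happy = sum(1 for r in ratings if r >= 8)
        print("callbacks returned: %d" % len(ratings))
        print("average rating:     %.1f / 10" % (sum(ratings) / float(len(ratings))))
        print("rated 8 or better:  %.0f%%" % (100.0 * happy / len(ratings)))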

NOTE: the purpose of metrics is not just to inform mgmt, but also to
identify opportunities for improvement, which I believe is really what
your management is trying to get you to see.
-- 
 +----------------------------------------------------------------------------+
 | Paul F Gerwitz  WA2WPI  | SMTP: gerwitz@kodak.com                          |
 | Eastman Kodak Co        | UUCP: ..uunet!atexnet!kodak!eastman!gerwitz      |
 +----------------------------------------------------------------------------+

jerry@talos.npri.com (Jerry Gitomer) (05/23/91)

uudot@ariel.lerc.nasa.gov (Dorothy Carney) writes:



:Our upper management, which is not very computer literate, wants to
:receive quarterly reports (oral: 15 minutes!) which are "METRICs" of
:customer satisfaction.  Our customers are hundreds of researchers and
:engineers, as well as secretaries and office staff.

:Do any of you have metrics for user satisfaction?  

	I don't happen to have any metrics handy, but I have had similar
	assignments in the past.  Based on my experience I believe that
	"User satisfaction" is based on perception rather than reality.
	The first thing you have to do is get your users to tell you 
	what they want -- in quantifiable terms.  For example, if they 
	tell you that terminal response time is too long, ask them "How 
	long should it be?".  Sure, you will have a lot of data to weed
	through (some of it ridiculous), but you will then be able to
	establish some quantifiable goals and compare your performance
	to the goals.

:Somehow, telling upper managers about the mean time between failures
:or the integrity of disk data doesn't do it. 

	You're right.  As someone else pointed out, it is more important
	not to fail at the wrong time than not to fail at all.  Also, both
	managers and users expect the system to maintain the integrity of
	the data.  (If your system doesn't, you have some real problems
	that you'd better take care of NOW.)
	
:Neither does surveying
:a random sampling of users with insipid questions like "On a scale
:of 1 to 10 ...".

	Yes, but this isn't a bad idea for an initial survey.  Take 
	the 1 to 10s and formulate questions with meaningful choices.
	For example on the initial survey you might ask "Please rate
	terminal response time on a scale of one to ten."  If the bulk 
	of your users respond in the range of 8 to 10 you can report
	to management that terminal response time is satisfactory and
	go on to something else.  If the bulk of your responses are 7
	or lower you might come back with a question on the second
	survey which asks the users to pick an acceptable terminal 
	response time from a number of choices such as: under 5 seconds,
	6 to 10 seconds, etc.  (If they all ask for under 5 seconds and
	your system can't support better than 10 seconds -- tell top
	management that the users can't be satisfied unless you obtain
	more equipment).  In any event this will give you a set of goals
	to measure against.
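
	(Just to make that arithmetic concrete, here is a sketch of the
	decision rule for one question, with made-up sample answers;
	where you draw the line for "the bulk" is your call:)

	    # sketch: if the bulk of 1-10 answers land in 8-10, report
	    # "satisfactory"; otherwise flag the topic for a follow-up
	    responses = [9, 8, 6, 10, 7, 8, 9, 5, 8, 10]   # sample data

	    high = sum(1 for r in responses if r >= 8)
	    share = 100.0 * high / len(responses)

	    if share >= 50:   # assumed cutoff for "the bulk"
	        print("response time: satisfactory (%.0f%% rated 8-10)" % share)
	    else:
	        print("response time: %.0f%% rated 8-10 -- ask a follow-up" % share)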

:We do have a Help Desk which tracks telephone requests for help ...
:but focusing the metrics on problem reports would be negative, and we
:want to be positive.

	How about the percentage of your users who don't call the Help
	Desk?  If they don't need help they at least know how to use
	the system to get their jobs done.  
-- 
Jerry Gitomer at National Political Resources Inc, Alexandria, VA USA
I am apolitical, have no resources, and speak only for myself.
Ma Bell (703)683-9090  (UUCP:  ...uunet!uupsi!npri6!jerry )

vince@bcsaic.UUCP (Vince Skahan) (05/25/91)

In article <2299@talos.npri.com> jerry@talos.npri.com (Jerry Gitomer) writes:
>	How about the percentage of your users who don't call the Help
>	Desk?  If they don't need help they at least know how to use
>	the system to get their jobs done.  

Be careful that you know what satisfaction you're trying to measure.
Your service...their training...the whole environment from the user's
perspective...

You might be trying to ensure satisfaction that you can't provide 
because the limiting factor is not under your control.

you need to do a few things to get REAL metrics:
	- ALL calls must go to the hotline (need your mgt. approval)
	- ALL calls that go through the hotline and all items
		that are "walk-up" must be tracked
	- when you're done tracking, you have to categorize the 
		items...
		- problems/requests/training/enhancements
		- hw/sw/location, etc.
		- who's the call from (person and org.)
	- after you categorize them...look for patterns that you can
		identify...
		- do you get lots of mail questions from one org.?
		- if so, are they trained OK ?
		- are they just bored, or lonely, or pains in the neck?
		- are they trying to talk with one particular
			other organization ?
		- is there an unstable gateway or network in the middle?
		- are they complaining about something you don't think
			you've agreed or been chartered to provide?
		- is something broken ? (broken again?)

after a while, you can relax the "track all calls" rule to "track all
calls within reason".
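
(Once everything is tracked, even a flat file is enough to hunt for
those patterns -- a sketch, assuming one call per line with the
caller's organization and a category word; the file name and the
tab-separated layout are assumptions about how you choose to log:)

    # sketch: cross-tabulate tracked calls by (organization, category)
    # to spot patterns like one org generating most of the mail questions
    from collections import Counter

    calls = Counter()
    with open("calls.log") as log:
        for line in log:
            if not line.strip():
                continue
            org, category = line.rstrip("\n").split("\t")[:2]
            calls[(org, category)] += 1

    for (org, category), count in calls.most_common(10):
        print("%-15s %-12s %4d" % (org, category, count))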

You also need to be able to define what calls you WILL NOT be
responsible for.  If you have a particular package, are you responsible
for training the users how to make the package do something, or are you
just responsible for bringing up the package so that it's correctly
configured?

[...example...around here, I'm not supposed to be an Interleaf weenie
	because there are people who ARE...but in my last job I had
	to be an expert in ALL the packages...because there wasn't
	anyone designated to be one...]

Do you want to hear that the printer needs paper because
it's your job to fix it (my condolences if so) or can you politely tell
them to get their lazy butts over to the printer and put the paper in
themselves (in such a manner that they say "thank you" afterward) ?

[another example...

I get lots of STUPID calls in this location because the people are 
treated as "dumb users" by their management, so they get sufficiently
(insufficient) training so that they BECOME "stupid users". I haven't
been too successful in getting the point across that if they get a
little training, they really CAN think for themselves, have more fun,
and relieve the pressure from us...making both sides happier and more
productive... ]

In a past job, I got a lot of calls from users who were new to one
organization whose management wouldn't spring for a little training
for them (yet was quite prompt to bitch that they weren't getting
"their money's worth" from the computing organization (us), even
though we were spending 100% of our already insufficient time doing
the training rather than enhancing the environment).

The solution was to track EVERYTHING, no matter how painful it was.
We eventually got some "operations" organization to do the menial
stuff for us (password resets, print queues, etc.) and we could do the
project and enhancement stuff that the customer really wanted.

But to get our management's approval and support, we basically all had
to threaten simultaneous resignations and back up our complaints with
hard statistics about the silly questions we were getting and how
unpleasant an existence it was to be stuck in the middle.

(incidentally the same stats that helped get the users their training
and got us relief from the noise also totally answered the question of
whether my organization was adding value or not...I pulled out the
reports with the accusing customer manager and showed him the calls,
categories, callers, resolution time, etc. as well as the non-hotline
stuff we were doing for them...and instead of cutting our funding, he
went to bat for ADDING funding).

You have to either force all calls through a hotline that tracks things
or discipline yourselves to track them regardless of where the calls
come from.

(we had an interesting (to me) solution...we told them to mail all
problem reports, questions, and requests for enhancement to a "service"
account...which used the vacation program to send back an automatic
response that said something along the lines of:

	"this note is intended to provide positive confirmation that
		your electronic mail has been received by the 
		service account.

	all problems should be reported by calling xxx-xxxx (our 24x7
		operations guys).

	requests and questions are worked in priority-based order
		with the priorities negotiated on a routine basis
		between engineering and computing management."

yes...we really did meet routinely with the various organizations
to let them make the call regarding priorities (with a little help
from us if we disagreed)...doing that removed us from the spot in
the middle when the inevitable request came in that was insignificant
to everyone except that one person...we could (politely) ask them to
get their management chain to change the priorities and call us.

-- 
----------------------------------------------------------------
                         Vince Skahan   
 vince@atc.boeing.com                  ...uw-beaver!bcsaic!vince
        	(lifelong Phillies fan...pity me)

tay@hpcvlx.cv.hp.com (Mike Taylor) (05/29/91)

>>Do any of you have metrics for user satisfaction?
>
> How about response-time statistics for problem calls?
> How about system availability?  E.g., a table showing the
> uptime/downtime for each system.

Some of the information you are really interested in will be tough to get.
It seems like the easiest way to measure user satisfaction is by 
measuring what your support organization hears, but that is probably
only a very limited perspective about what is going on.

I think the best way to measure user satisfaction is to talk directly 
to a sample of typical customers, and for most products those are the
customers your support organization never hears from.  You need to 
talk directly with these customers and find out both what they like and
dislike about your products/services.

The marketing manager of our local organization (my boss's boss) has
made a goal for each person in marketing to make at least two customer
visits in the next year.  That includes our support, training and 
documentation groups in addition to product marketing.

Again, the information you really want to know regarding customer
satisfaction will be pretty tough to find.  For instance, I
lifted this from our support group's bulletin board, but I have no
idea what procedure they used to collect this data or how accurate
it really is:


	The HIGH COST of losing a customer:


	* For every customer who complains, 26 others remain silent

	* 91% of unhappy customers will never purchase goods or services
	  from you again

	* The average "wronged" customer will tell 8 to 16 people

	* It costs about 5 times as much to attract new customers as it
	  costs to keep old ones

	* Solve customer complaints and 82% to 95% will continue to
	  make purchases


	Source:  Technical Assistance Research Programs, Washington DC



Pax,

Mike Taylor
Current Products Engineering & Online
Interface Technology Operation

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Surface:  Hewlett-Packard                 Internet:  tay@hpcvlx.cv.hp.com
	  1000 NE Circle Boulevard        UUCP:      {hpfcla}!hpcvlx!tay
	  Corvallis, Oregon 97330         Fax:       (503) 750-4980

    "I get stranger things than you free with my breakfast cereal!" 
                                                   - Zaphod Beeblebrox