taylor@hplabsc.UUCP (10/07/86)
This article is from nike!harvard!wanginst!infinet!rhorn (Rob Horn)
and was received on Tue Oct 7 02:24:40 1986
Automatic monitoring has a legitimate role as part of a well-managed quality
control system. Very few of the people doing monitoring use it this way, and
many of the problems stem from incompetence in quality control, not from
automatic monitoring itself. In quality control, monitoring serves a crucial
purpose: providing accurate feedback
on actual quality achieved. Without some form of measurement it is very hard
to control quality. (Of course there are many aspects of quality that need not
involve the kind of monitoring under discussion.)
The first guideline in examining any monitoring situation is to see whether
there is a larger quality control system involved. Can you demonstrate how
the measurements relate to quality? How system changes will be tracked and
experiments controlled? Does it all serve legitimate purposes? (Most
of the examples posted so far fail one or more of these tests.) Then, are
the measurements impartial? For a good example of impartial evaluations of
complex situations, read some air safety crash reports (look in Aviation Week).
They are very dry and impartial descriptions of what happened, what decisions
were made, and what the contributing factors were. Then for good examples of
poor evaluations, read a Congressional investigation report. They are full of
partial truths, scapegoats, quick fixes, and oversimplifications. The
automatic monitoring should be one part (NOT all) of a report similar to the
air safety reports.
I have used automatic monitoring systems successfully. The
situation was a computer facility with a great many user complaints. A Q/C
plan was established, and as one part of it, careful, accurate measurements
were taken of all operational problems and activities. This involved both automatic and
manual logging. Initially the people involved were quite wary. They knew
there was a problem, but did not like being monitored. This concern evaporated
after a few months for two reasons:
1. The monitoring results were clearly impartial, and when
individuals made mistakes the response was not the creation of
scapegoats.
2. The monitoring results were clearly being used to improve
procedures, training, and quality, without unreasonable work loads.
After the system was showing results, the people became quite attached to the
monitoring. It enabled them to deal with users in a sympathetic way, and to
show real quality improvements with hard evidence instead of getting into
unpleasant emotional arguments. When I shut down monitoring after 2 years,
they actually complained about losing their monthly performance reports.
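The monthly reports described above can come from something as simple as tallying an incident log per category. The sketch below is my own illustration, not the actual system from the story; the record format and category names are assumptions.

```python
# Hypothetical sketch: tallying an operations incident log into a
# monthly report, in the spirit of the monitoring described above.
# The record layout and categories are invented for illustration.
from collections import Counter
from datetime import date

# Each record: (date of incident, "auto" or "manual" logging, category)
incident_log = [
    (date(1986, 9, 2), "auto", "disk full"),
    (date(1986, 9, 5), "manual", "tape mount delay"),
    (date(1986, 9, 5), "auto", "disk full"),
    (date(1986, 9, 17), "manual", "operator error"),
]

def monthly_report(log, year, month):
    """Count incidents per category for one month."""
    return Counter(cat for d, _src, cat in log
                   if d.year == year and d.month == month)

report = monthly_report(incident_log, 1986, 9)
for category, count in sorted(report.items()):
    print(f"{category}: {count}")
```

The point of the hard numbers is the one made above: a per-category count over time shows real improvement (or lack of it) without emotional argument about individual incidents.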
I was lucky in that I did not face a situation where conflicting goals existed
such that one could be automatically monitored while the other could not.
Telephone directory assistance is one such area. The conflicting goals are
speed and courtesy. This kind of situation is extremely hard to handle even
for the most skilled Q/C management.
I should also note that there were exceptions to the approval of monitoring.
Two junior SA's and one operator kept the mindset ``I am a superuser.
I make the rules. I don't follow rules.'' About 9 months into the project
they were transferred out, still very uncooperative about monitoring. By
this time they wanted out because they were now perceived as being problems
by their peers, who were excited about the visible improvements to computer
center operation, and who did not like these three interfering with the new
system.
Rob Horn
UUCP: ...{decvax, seismo!harvard}!wanginst!infinet!rhorn
Snail: Infinet, 40 High St., North Andover, MA