[comp.ai] interviewing experts

smoliar@vaxa.isi.edu (Stephen Smoliar) (01/31/88)

I just read the article, "How to Talk to an Expert," by Steven E. Evanson
in the February 1988 issue of AI EXPERT.  While I do not expect profound
technical insights from this magazine, I found certain portions of this
article sufficiently contrary to my own experiences that I decided a
bit of flaming was in order.  Mr. Evanson is credited as being "a
practicing psychologist in Monterey, Calif., who works in the expert
systems area."  Let me being with the observation that I am NOT a
practicing psychologist, nor is my training in psychology.  What I
write will be based primarily on the four years of experience I had
at the Schlumberger-Doll Research Laboratory in Ridgefield, Connecticut
during which I had considerable opportunity to interact with a wide
variety of field experts and to attempt to implement the results of
those interactions in the form of software.

Mr. Evanson dwells on many approaches to getting an expert to explain
himself.  For the most part, he addresses himself to the appropriate sorts
of probing questions the interviewer should ask.  Unfortunately, one may
conclude from Mr. Evanson's text that such interviewing is a unilateral
process.  The knowledge engineer "prompts" the expert and records what
he has to say.  Such a practice misses out on the fact that experts are
capable of listening, too.  If a knowledge engineer is discussing how an
expert is solving a particular problem, then it is not only valuable, but
probably also important, that the interviewer be able to "play back" the
expert's solution without blindly mimicking it.  In other words, if the
interviewer can explain the solution back to the expert in a way the
expert finds acceptable, then both parties can agree that the information
has been transferred.  This seems to be the most effective way to deal
with one of Mr. Evanson's more important observations:

	It is very important for the interviewer to understand
	how the expert thinks about the problem and not assume
	or project his or her favored modes of thinking into the
	expert's verbal reports.

Maintaining bilateral communication is paramount in any encounter with an
expert.  Mr. Evanson makes the following observation:

	Shallowness of breathing or eyes that appear to defocus
	and glaze over may also be associated with internal
	visual images.

Unfortunately, it may also indicate that the expert is at a loss at that
stage of the interview.  It may be that he has encountered an intractable
problem, but another possibility is that he really has not processed a
question from the interviewer and can't figure out how to reply.  If
the interviewer cannot distinguish "deep thought" from "being at a loss,"
he is likely to get rather confused with his data.  Mr. Evanson would have
done better to cultivate an appreciation of this point.

It is also important to recognize that much of what Mr. Evanson has to say
is opinion which is not necessarily shared "across the board."  For
example:

	As experts report how they are solving a problem, they
	translate internal experiences into language.  Thus
	language becomes a tool for representing the experiences
	of the expert.

While this seems rather apparent at face value, we should bear in mind that
it is not necessarily consistent with some of the approaches to reasoning
which have been advanced by researchers such as Marvin Minsky in his work
on memory models.  The fact is that often language can be a rather poor
medium for accounting for one's behavior.  This is why I believe that it
is important that a knowledge engineer should raise himself to the level
of novice in the problem domain being investigated before he even begins
to think about what his expert system is going to look like.  It is more
important for him to internalize problem solving experiences than to simply
document them.

In light of these observations, the sample interview Mr. Evanson provides
does not serve as a particularly shining example.  He claims that he began
an interview with a family practice physician with the following question:

	Can you please describe how you go about making decisions
	with a common complaint you might see frequently in your
	practice?

This immediately gets things off on the wrong foot.  One should begin with
specific problem solving experiences.  The most successful reported interviews
with physicians have always begun with a specific case study.  If the
interviewer does not know how to formulate such a case study, then he
is not ready to interview yet.  Indeed, Mr. Evanson essentially documents
that he began with the wrong question without explicitly realizing it:

	This question elicited several minutes of interesting
	unstructured examples of general medical principles,
	data-gathering techniques, and the importance of a
	thorough examination but remained essentially unanswered.
	The question was repeated three or four times with
	slightly different phrasing with little result.

From this point on, the level of credibility of Mr. Evanson's account
goes downhill.  Ultimately, the reader of this article is left with
a potentially damaging false impression of what interviewing an expert
entails.

One important point I observed at Schlumberger is that initial interviews
often tend to be highly frustrating and not necessarily that fruitful.
They are nonetheless necessary for the anthropological task of establishing
a shared vocabulary.  However, once that vocabulary has been set, the
burden is on the knowledge engineer to demonstrate the ability to use
it.  Thus, the important thing is to be able to internalize some initial
problem solving experience enough so that it can be implemented.  At
this point, the expert is in a position to do something he is very good
at:  criticize the performance of an inferior.  Experts are much better
at picking apart the inadequacies of a program which is claiming to
solve problems than at giving the underlying principles of solution.
Thus, the best way to get information out of an expert is often to
give him some novice software to criticize.  Perhaps Mr. Evanson has
never built any such software for himself, in which case this aspect
of interacting with an expert may never have occurred to him.
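
To make that suggestion concrete, the "novice software" need not be
anything elaborate.  A throwaway prototype of a handful of rules, run in
front of the expert so that he can watch it reason and object, is usually
enough to get the criticism flowing.  The sketch below is only an
illustration of the general idea (in Python, with an invented toy domain;
the facts and rules are drawn from no actual interview):

	# A minimal "novice" prototype of the sort described above.  This
	# is a sketch only: the domain, facts, and rules are invented for
	# illustration and are not taken from any real interview.

	RULES = [
	    # (name, premises, conclusion)
	    ("r1", {"sore-throat", "fever"}, "suspect-strep"),
	    ("r2", {"suspect-strep"},        "recommend-throat-culture"),
	    ("r3", {"cough", "no-fever"},    "suspect-viral"),
	]

	def forward_chain(facts):
	    """Apply the rules until no new conclusions can be drawn,
	    printing a trace the expert can pick apart."""
	    facts = set(facts)
	    fired = True
	    while fired:
	        fired = False
	        for name, premises, conclusion in RULES:
	            if premises <= facts and conclusion not in facts:
	                facts.add(conclusion)
	                print(f"{name}: {sorted(premises)} -> {conclusion}")
	                fired = True
	    return facts

	if __name__ == "__main__":
	    # The expert watches the trace and objects ("you never asked
	    # how long the fever has lasted"); that objection becomes the
	    # next rule.
	    forward_chain({"sore-throat", "fever"})

The point of such a program is not to be right; it is to be wrong in ways
the expert cannot resist correcting.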

fordjm@byuvax.bitnet (02/05/88)

     
Note:  The following article is from both Larry E. Wood and John
M. Ford of Brigham Young University.
     
We have also recently read Evanson's AI Expert article on
interviewing experts and feel that some discussion of this topic
would prove useful.  Relative to Steve Smoliar's reactions, we
feel it is appropriate to begin with a disclaimer of sorts.  As
cognitive psychologists, we hope those reading Evanson's article
will not judge the potential contributions of psychologists by
what they find there.  Some of the points Evanson chooses to
emphasize seem counterintuitive (and perhaps counterproductive)
to us as well.  We attribute this in part to his being a
practicing clinician rather than a specialist in cognitive
processes.

On a more positive note, as relative newcomers to the newly
emerging field of knowledge engineering (two years), we do
believe that there are social science disciplines which can make
important contributions to the field.  These disciplines include
cognitive science research methodology, educational measurement
and task analysis, social science survey research,
anthropological research methods, protocol analysis, and others.
     
While knowledge elicitation for the purpose of building expert
systems (or other AI applications) has its own special set of
problems, we believe that these social science disciplines have
developed some methods which knowledge engineers can adapt to
the task of knowledge elicitation and documentation.  Two
examples of such interdisciplinary "borrowing" which are
presently influencing knowledge engineering are the widespread
use of protocol analysis methods (see a number of articles in
this year's issues of the International Journal of Man-Machine
Studies) and the influence of anthropological methods and
perspectives (alluded to by Steve Smoliar in his previous
posting and represented in the work of Marianne LaFrance, see
also this year's IJM-MS).  It is our belief that there are other
areas in the social sciences which can make important
contributions, but which are not yet well known in AI circles.
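
As one small, concrete illustration of what such borrowing can look
like, a common step in protocol analysis is to segment a think-aloud
transcript, code each segment against a category scheme, and then
examine category frequencies and transitions.  The fragment below is
purely hypothetical (the transcript, the coding scheme, and the code
itself are invented for illustration and describe no published
procedure):

	# Hypothetical sketch of one step of protocol analysis: tallying
	# category frequencies and transitions over coded think-aloud
	# segments.  The transcript and coding scheme are invented.
	from collections import Counter

	# Each segment of the (hypothetical) transcript has already been
	# assigned a category by a human coder.
	coded_segments = [
	    ("I'd start by asking how long it has hurt", "DATA-GATHERING"),
	    ("that pattern usually suggests muscle strain", "INFERENCE"),
	    ("so I'd hold off on imaging for now", "DECISION"),
	    ("but I'd still check range of motion", "DATA-GATHERING"),
	]

	frequencies = Counter(code for _, code in coded_segments)
	for code, count in frequencies.most_common():
	    print(f"{code:15s} {count}")

	# Transitions between categories are often as revealing as the
	# raw counts.
	transitions = Counter(
	    (a[1], b[1]) for a, b in zip(coded_segments, coded_segments[1:])
	)
	for (src, dst), count in transitions.items():
	    print(f"{src} -> {dst}: {count}")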
     
This is *not* intended as a blanket endorsement of approaches to
knowledge elicitation based on social science disciplines.  We
do, however, believe that it is important for practicing
knowledge engineers to attend to methodologies developed outside
of AI so that they can spend their time refining and extending
their application to AI rather than "reinventing the wheel."

We have a paper in preparation which addresses some of these
issues.


Larry E. Wood                      John M. Ford
woodl@byuvax.bitnet                fordjm@byuvax.bitnet
     

garyb@hpmwtla.HP.COM (Gary Bringhurst) (02/09/88)


Warning: flaming ahead

As a (modest) computer scientist, I always find it disturbing to read
condescending remarks like those of professors Wood and Ford, who have, by
their own admission, been involved in AI only a short time (two years).
 
>We
>do, however, believe that it is important for practicing
>knowledge engineers to attend to methodologies developed outside
>of AI so that they can spend their time refining and extending
>their application to AI rather than "reinventing the wheel."

I agree with this statement, as I believe any professional should try to expand
his area of expertise as far as possible.  Would I be out of place to ask
that cognitive psychologists who wish to contribute to AI study a little
computer science in return?

I have actually taken a class from Dr. Wood, and unless his depth of knowledge
in the field of computer science has increased significantly since early 1987,
I would find it very hard to give much weight to anything he says.

>Larry E. Wood                      John M. Ford
>woodl@byuvax.bitnet                fordjm@byuvax.bitnet

I suppose I'm just tired of well-meaning zealots jumping into the fray.
The AI bandwagon is loaded heavily enough as is.  Let's lighten the load
a little.

Gary L. Bringhurst

(DISCLAIMER:	My opinions do not, in general, bear any resemblance at all
		to the opinions of my employer, which actually has none.)

gilbert@hci.hw.ac.uk (Gilbert Cockton) (02/15/88)

In article <2300001@hpmwtla.HP.COM> garyb@hpmwtla.HP.COM (Gary Bringhurst) writes:
>
>Would I be out of place to ask that cognitive psychologists who wish to 
> contribute to AI study a little computer science in return?
Hear! hear! (and some psychology too :-) )
>
>I suppose I'm just tired of well-meaning zealots jumping into the fray.
 (reference to Knowledge Engineering tutors with no computing knowledge)

Whilst sceptical about much AI, it's my opinion that in 10 years time,
Knowledge Engineering will be seen as one of the most important
contributions of AI to Systems Design.  Why? - because the skills
required for successful knowledge elicitation are applicable to ALL
systems design.  The result is that computer specialists who would
never have attended 'useless' courses or read up on 'Participative design' 
and 'end-user involvement' have been seduced into learning about some
central skills in these design approaches (KE is still weak on
organisational/operations modelling though).  So, even if Expert
Systems never become the dominant systems technology, we will have
more systems specialists who do know how to find out what people
want. So, those well-meaning zealots, ignorant of computing, but
knowledgeable about human issues, have, in the promise of Intelligent
Systems and big profits, at last found a way to influence and educate
more computing professionals.  Pass the quiche!
-- 
Gilbert Cockton, Scottish HCI Centre, Heriot-Watt University, Chambers St.,
Edinburgh, EH1 1HX.  JANET:  gilbert@uk.ac.hw.hci   
ARPA: gilbert%hci.hw.ac.uk@cs.ucl.ac.uk UUCP: ..{backbone}!mcvax!ukc!hci!gilbert