dg1v+@ANDREW.CMU.EDU (David Greene) (06/15/88)
Date: Tue, 14 Jun 88 08:18 EDT
From: David Greene <dg1v+@andrew.cmu.edu>
To: ailist@kl.sri.com
Subject: Re: Me, Karl, Stephen, Gilbert
In-Reply-To: <digest.0Wh7fTy00UkcQ0W1RU@andrew.cmu.edu>

In AIList Digest V7 #29, Stephen Smoliar writes:

> What have all those researchers who don't spend so much
> time with computer programs have to tell us?

I'm not advocating Mr. Cockton's views, but the limited literature breadth
in many AI papers *is* self-defeating.  For example, until very recently,
few expert system papers acknowledged the results of 20+ years of
psychology research on Judgement and Decision Making.  It seems odd that
AI people studying experts' decision making would not reference
behavioral/performance research on human and expert decision making.

The works of Kahneman, Tversky, Hogarth, and Dawes (to name some
luminaries) all identify inherent flaws in human (including experts')
judgement.  These dysfunctional biases produce consistently suboptimal
decision rules across many realistic conditions (setting aside debates on
"optimality").  Yet AI researchers and knowledge engineers attempt to
reproduce the expert faithfully and then compare the resulting system to
the expert's performance.  Is it any wonder that many expert systems don't
work in the field...

Perhaps broader literature and research exposure would be advantageous to
AI (or any field)...

-David
dg1v@andrew.cmu.edu
Carnegie Mellon

"You're welcome to use my opinions, just don't get them all wrinkled..."