[mod.ai] Addressing some of Dreyfus' specific points

RUSSELL@SUMEX-AIM.ARPA (Stuart Russell) (03/02/86)

To address some of the actual content of Dreyfus' recent talk at Stanford,
delivered to an audience consisting mostly of AI researchers:

1) The discussion after the talk was remarkably free of strong dissent, for
the simple reason that Dreyfus is now making a sloppy attempt at a
cognitive model for AI, rather than offering any substantive criticism of AI.
Had his talk been submitted to AAAI as a paper, it would probably have been
rejected for containing no new ideas and for having only weak empirical backing.

2) The backbone of his argument is that human *experts* solve problems by
accessing a store of cached, generalized solutions rather than by extensive
reasoning. He admits that, before becoming expert, humans operate just like
AI reasoning systems; otherwise they couldn't solve any problems and thus
couldn't cache solutions. He also admits that even experts use reasoning
to solve problems insufficiently similar to those they have seen before.
He doesn't say how solutions are to be abstracted before caching, and 
doesn't seem to be aware of much of the work on chunking, rule compilation,
explanation-based generalization and macro-operator formation which has
been going on for several years. Thus he seems to be proposing a performance
mechanism that AI proposed long ago, acting as if he (or his brother) had
invented it, and assuming, therefore, that AI cannot yet have made any
progress towards understanding it.
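
To make the mechanism concrete, here is a minimal sketch in Python of the
cache-then-reason scheme he is describing. The problem features, the
relevance filter and the base-level solver are all invented stand-ins for
illustration; in particular, abstract() is a stub for precisely the
abstraction step he never specifies.

    # Toy sketch: problems are sets of feature strings; solutions are cached
    # under a generalized (abstracted) key and reused on later problems.

    RELEVANT = {"goal-blocked", "tool-available"}   # assumed relevance filter

    def abstract(problem):
        """Generalize a problem by keeping only the (assumed) relevant
        features -- the abstraction step Dreyfus leaves unspecified."""
        return frozenset(problem) & RELEVANT

    def reason_from_first_principles(problem):
        """Stand-in for the extensive search/reasoning of the novice phase."""
        return "plan-for-" + "-".join(sorted(abstract(problem)))

    solution_cache = {}

    def solve(problem):
        key = abstract(problem)
        if key in solution_cache:          # expert phase: retrieve cached solution
            return solution_cache[key]
        solution = reason_from_first_principles(problem)   # novice phase
        solution_cache[key] = solution     # cache the generalized solution
        return solution

    print(solve({"goal-blocked", "tool-available", "room-is-red"}))   # reasons, then caches
    print(solve({"goal-blocked", "tool-available", "room-is-blue"}))  # cache hit

All the interesting work hides in abstract(); that is exactly the part he
declines to specify, and exactly the part the chunking and generalization
literature addresses.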

3) He proposes that humans access previous situations and their solutions
by an "intuitive, holistic matching process" based on "total similarity"
rather than on "breaking down situations into features and matching on
relevant features". When I asked him what he meant by this, he said
he couldn't be any more specific and didn't know any more than he'd said.
(He taped our conversation, so he can no doubt correct the wording.)
In the talk, he mentioned Roger Shepard's work on similarity (stimulus
generalization) as support for this view, but when I asked him how the
work supported his ideas, it became clear that he knew very little about it.
Shepard's results can be explained equally well if situations are
described in terms of features, but more importantly they only apply when
the subject has no idea of which parts of the situation are relevant to the
solution, which is hardly the case when an expert is solving problems. In
fact, the fallacy of analogical reasoning by total similarity (the only
mechanism he proposes to underlie the expert phase of skilled performance)
has long been recognized in philosophy, and more recently in AI as well.
Moreover, the concept of similarity without any goal context (i.e.
without any purpose for which the similarity will be used) seems to be
incoherent. Perhaps this is why he doesn't attempt to define what it means.
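
The point is easy to make concrete. Any computable similarity measure must
somehow weight the features shared by two situations, and it is the goal
context that fixes the weights. A trivial sketch (all feature names and
numbers invented for illustration):

    # Similarity over feature descriptions. Without a goal context supplying
    # weights, every feature counts alike; with a purpose, the relevant
    # features dominate.

    def similarity(a, b, weights):
        return sum(weights.get(f, 0.0) for f in set(a) & set(b))

    case      = {"knight-fork", "open-file", "white-to-move", "wooden-board"}
    precedent = {"knight-fork", "closed-file", "black-to-move", "wooden-board"}

    # "Total similarity": all features weighted equally, so the wooden board
    # counts for exactly as much as the knight fork.
    uniform = {f: 1.0 for f in case | precedent}
    print(similarity(case, precedent, uniform))    # 2.0

    # Goal-weighted similarity: the purpose (winning material) determines
    # which shared features matter.
    tactical = {"knight-fork": 1.0, "open-file": 0.3, "white-to-move": 0.2}
    print(similarity(case, precedent, tactical))   # 1.0

Change the purpose and the "same" two positions become more or less similar;
without some purpose there is no fact of the matter.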

4) His final point is that such a mechanism cannot be implemented in a
system that uses symbolic descriptions. Quite apart from the fact that
the mechanism doesn't work, and cannot produce any kind of useful
performance, there is no reason to believe this claim, nor does he give one.
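
Indeed, matching whole situations against stored cases is routinely
implemented over symbolic descriptions. A trivial nearest-neighbour sketch
(case contents and the distance measure are invented for illustration):

    # Whole-situation matching over symbolic descriptions: retrieve the
    # stored case whose description disagrees least with the new situation.

    def distance(a, b):
        """Size of the symmetric difference of two whole descriptions."""
        return len(set(a) ^ set(b))

    case_memory = {
        frozenset({"fever", "rash", "joint-pain"}): "treatment-A",
        frozenset({"fever", "cough", "fatigue"}):   "treatment-B",
    }

    def recall(situation):
        nearest = min(case_memory, key=lambda case: distance(case, situation))
        return case_memory[nearest]

    print(recall({"fever", "rash", "joint-pain", "swelling"}))   # -> treatment-A

Whether such matching produces expert-level performance is another question
(see point 3), but its implementability in symbolic terms is not in doubt.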

In short, to use the terminology of review forms, he is now doing AI, but
the work doesn't contain any novel ideas or techniques, does not report
on substantial research, does not properly cite related work and does
not contribute substantially to knowledge in the field. If it weren't
for the bee in his bonnet about proving AI (except the part he's now doing)
to be fruitless and dishonest, he might be able to make a useful
contribution, especially given his training in philosophy.

Stuart Russell
Stanford Knowledge Systems Lab
-------