[net.ai] 'Explaining' expert system algor

marcel@uiucdcs.UUCP (marcel ) (03/27/84)

#R:eosp1:-71500:uiucdcs:32300022:000:939
uiucdcs!marcel    Mar 26 19:16:00 1984

There is no need for expert system software to be well-understood by anyone but
its designers; there IS a need for systems to be able to explain THEMSELVES.
Witness human thinking: after 30 years of serious AI and much more of cognitive
psychology, we still don't know how we think, but we have relatively little
trouble getting people to explain themselves to us.
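The kind of self-explanation being argued for here can be sketched very simply (this is my illustration, not anything from the original post): a toy forward-chaining rule system that records which rule derived each fact, so that it can answer "why?" by replaying its own reasoning, roughly in the manner of MYCIN-style rule traces. The rule names and facts below are invented.

```python
class Engine:
    def __init__(self, rules):
        self.rules = rules      # list of (name, premises, conclusion)
        self.facts = {}         # fact -> name of deriving rule, or "given"

    def assert_fact(self, fact):
        self.facts[fact] = "given"

    def run(self):
        # Forward-chain until no rule can add a new fact.
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in self.rules:
                if conclusion not in self.facts and \
                        all(p in self.facts for p in premises):
                    self.facts[conclusion] = name
                    changed = True

    def explain(self, fact):
        # Reconstruct a readable derivation trace from the record.
        if fact not in self.facts:
            return fact + ": unknown"
        source = self.facts[fact]
        if source == "given":
            return fact + ": given"
        premises = next(p for n, p, c in self.rules if n == source)
        lines = ["%s: concluded by rule %s from %s"
                 % (fact, source, ", ".join(premises))]
        for p in premises:
            lines.append("  " + self.explain(p).replace("\n", "\n  "))
        return "\n".join(lines)

# Hypothetical diagnostic rules, for illustration only.
rules = [
    ("R1", ["engine cranks", "engine won't start"], "fuel or ignition fault"),
    ("R2", ["fuel or ignition fault", "fuel gauge reads empty"], "out of fuel"),
]
e = Engine(rules)
e.assert_fact("engine cranks")
e.assert_fact("engine won't start")
e.assert_fact("fuel gauge reads empty")
e.run()
print(e.explain("out of fuel"))
```

The point of the sketch is that the explanation facility needs no understanding of cognition at all: it is just bookkeeping over the program's own inference steps, which is exactly why a system can explain itself without its users understanding how it works internally.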

I think that we will not be able to produce such self-explanatory software
until we come up with a fairly comprehensive theory of our own mental workings,
which is, admittedly, not the same as understanding an expert program. On the
other hand, if you are of a theoretical bent you tend to accept Occam's razor,
and so I believe that such a theory of cognition will be as simplifying as the
Copernican revolution was for astronomy. Thereafter it's all variations on a
theme, and expert systems too will one day be correspondingly easy.

								Marcel S.
								U of Illinois