[comp.ai.digest] AI and science

JMC@SAIL.STANFORD.EDU (John McCarthy) (08/03/87)

	Like mathematics, philosophy and engineering, AI differs
from the (other) sciences.   Whether it fits someone's definition
of a science or not, it has need of scientific methods including
controlled experimentation.

	First of all, it seems to me that AI is properly part of
computer science.  It concerns procedures for achieving goals
under certain conditions of information and possibility for action.
We can even consider it analogous to linear programming.  Indeed, if
achieving one's goals always consisted of finding the values of
a collection of real variables that would minimize a linear
function of these variables subject to a collection of linear
inequalities, then AI would coincide with linear programming.
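
	As a purely illustrative sketch of that hypothetical case (the
example is not part of the original argument, and the particular
objective and constraints are invented), such a "goal" can be handed
directly to a linear programming routine, here Python's
scipy.optimize.linprog:

    # Illustrative only: a "goal" that happens to be exactly a linear program.
    # Minimize 2*x1 + 3*x2 subject to x1 + x2 >= 4, x1 - x2 <= 2, x1, x2 >= 0.
    from scipy.optimize import linprog

    c = [2, 3]                      # linear objective to minimize
    A_ub = [[-1, -1],               # -x1 - x2 <= -4  (i.e. x1 + x2 >= 4)
            [ 1, -1]]               #  x1 - x2 <=  2
    b_ub = [-4, 2]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, result.fun)     # optimal variable values and objective

The optimum here is x = (3, 1) with objective value 9; the point is
only that when goals really do take this form, achieving them is a
solved, purely mathematical problem.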

	However, the relation between goals, available actions,
the information initially available and that can later be acquired
is sometimes more complex than in any of the branches of computer
science whose scientific treatment consists mainly of mathematical
theorems.  We don't have a mathematical formalization of the general
problem faced in AI, let alone general mathematical methods for its
solution.  Indeed, what we know of human intelligence
doesn't suggest that a conventional mathematical formalization of
the problems intelligence is used to solve even exists.  For this
reason AI is to a substantial degree an experimental science.

	The fact that a general mathematical formalization of the problems
intelligence solves is unlikely doesn't make mathematics useless in AI.
Many aspects of intelligence are formalizable: languages of
mathematical logic are useful for expressing facts about the common
sense world, and logical reasoning, especially as extended by
non-monotonic reasoning, is useful for drawing conclusions.
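
	To make the non-monotonic point concrete (the example is not
from this note; it is the standard bird default from the literature),
a common sense fact might be written

	$\forall x.\ Bird(x) \land \lnot Ab(x) \supset Flies(x)$

From $Bird(Tweety)$ alone, minimizing the extension of $Ab$ (e.g. by
circumscription) licenses the conclusion $Flies(Tweety)$; if
$Ab(Tweety)$ is later learned, the conclusion is withdrawn, which is
precisely the non-monotonicity.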

	In my view a large part of AI research should consist of the
identification and study of intellectual mechanisms, e.g. pattern
matching and learning.  The problems whose computer solution exhibits
these mechanisms should be chosen for reasons of scientific
perspicuousness, just as genetics uses fruit flies and bacteria.
A. S. Kronrod once said that chess is the {\it Drosophila} of artificial
intelligence.  He might have been right, but the organizations that
support research have taken the view that problems should be chosen
for their practical importance.  Sometimes it is as if the geneticists
were required to do their work with elephants on the grounds that
elephants are useful and fruit flies are not.  Anyway, chess has been
left to the sportsmen, most of whom write only programs, not scientific
papers, and compete over who can get time on the largest computers or
get someone to support the construction of specialized chess computers.

	Donald Norman's complaints about the way AI research is
conducted have some validity, but the problem of isolating
intellectual mechanisms and making experiments worth repeating is
yet to be solved, so it isn't just a question of deciding to
be virtuous.

	Finally, I'll remark that AI is not the same as cognitive
psychology although the two studies are allied.  AI concentrates
more on the necessary relations between means and ends, while
cognitive psychology concentrates on how humans and animals
achieve their goals.  Any success in either endeavor helps the other.

	Methodology in AI is worth studying, but acceptance of its results
should be moderated by memory of the behaviorist catastrophe in
psychology.  Doctrines arising from methodological studies crippled the
science for half a century.  Indeed, psychology was rescued only by ideas
arising from the invention of the computer --- and at least partly by ideas
originating in AI.