[comp.ai.digest] Why AI is not a science

norman%ics@SDCSVAX.UCSD.EDU (Donald A. Norman) (07/03/87)

A private message to me in response to my recent AI List posting,
coupled with general observations, leads me to realize why so many of us
otherwise friendly folks in the sciences that neighbor AI can be so
frustrated with AI's casual attitude toward theory: AI is not a science
and its practitioners are woefully untutored in scientific method.

At the recent MIT conference on Foundations of AI, Nils Nilsson stated
that AI was not a science, that it had no empirical content, nor
claims to empirical content, that it said nothing of any empirical
value.  AI, stated Nilsson, was engineering.  No more, no less.  (And
with that statement he left to catch an airplane, stopping further
discussion.)  I objected to the statement, but now that I consider it
more deeply, I believe it to be correct and to reflect the
dissatisfaction people like me (i.e., "real scientists") feel with AI.
The problem is that most folks in AI think they are scientists and
think they have the competence to pronounce scientific theories about
almost any topic, but especially about psychology, neuroscience, or
language.  Note that perfectly sensible disciplines such as
mathematics and philosophy are also not sciences, at least not in the
normal interpretation of that word.  It is no crime not to be a
science.  The crime is to think you are one when you aren't.

AI worries a lot about methods and techniques, with many books and
articles devoted to these issues.  But by methods and techniques I
mean such topics as the representation of knowledge, logic,
programming, control structures, etc.  None of this methodology says
anything about content.  And there is the flaw: nobody in the field of
Artificial Intelligence speaks of what it means to study intelligence,
of what scientific methods are appropriate, what empirical methods are
relevant, what theories mean, and how they are to be tested.  All the
other sciences worry a lot about these issues, about methodology,
about the meaning of theory and what the appropriate data collection
methods might be.  AI is not a science in this sense of the word.
	Read any standard text on AI: Nilsson or Winston or Rich or
	even the multi-volume handbook.  Nothing on what it means to
	test a theory, to compare it with others, nothing on what
	constitutes evidence, or how to conduct experiments.
	Look at any science and you will find lots of books on
	experimental method, on the evaluation of theory.  That is why
	statistics are so important in psychology or biology or
	physics, or why counterexamples are so important in
	linguistics.  Not a word on these issues in AI.
The result is that practitioners of AI have no experience in the
complexity of experimental data, no understanding of scientific
method.  They feel content to argue their points through rhetoric,
example, and the demonstration of programs that mimic behavior thought
to be relevant.  Formal proof methods are used to describe the formal
power of systems, but this rigor in the mathematical analysis is not
matched by any comparable rigor in the theoretical analysis and
evaluation of the content.

This is why other sciences think that folks in AI are off-the-wall,
uneducated in scientific methodology (the truth is that they are), and
completely incompetent at the doing of science, no matter how
brilliant at the development of the mathematics of representation or
formal programming methods.  AI will contribute to the A, but will
not contribute to the I unless and until it becomes a science and
develops an appreciation for the experimental methods of science.  AI
might very well develop its own methods -- I am not trying to argue
that existing methods of existing sciences are necessarily appropriate
-- but at the moment, there is only clever argumentation and proof
through made-up example (the technical expression for this is "thought
experiment" or "gadanken experiment").  Gedanken experiments are not
accepted methods in science: they are simply suggestive for a source
of ideas, not evidence at the end.

don norman

Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa    	{decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu	norman%sdics.ucsd.edu@RELAY.CS.NET

hamscher@ht.ai.mit.EDU.UUCP (07/07/87)

   Date: Fri, 3 Jul 87 07:29:41 pdt
   From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)

I started out writing a message that said this message was
97% true, but that there was an arguable 3%, namely:

   The problem is that most folks in AI think they are scientists * * *

I was going to pick a nit with the word "most".

Then, I remembered that the AAAI-86 Proceedings were
split into a "Science" track and an "Engineering" track,
the former being about half again as thick as the latter...

jlc@goanna.OZ.AU.UUCP (07/08/87)

Don Norman says that AI is not a Science!
Is Mathematics a science or is it not?
No experiments, no comparisons; thus they are not Sciences!
Perhaps both AI and Maths are Arts, i.e. creative disciplines.
Both adhere to their own rigour and methods.
Both talk about hypothetical worlds.
Both are used by researchers from other disciplines as tools:
Maths to formally describe natural phenomena,
AI to construct computable models of these phenomena.

So, where is the problem?
Hmmm, I think some AI researchers wander into areas outside
their competence and impose their quasi-theories on the
specialists from other scientific domains. Some of those
quasi-theories are later reworked and adopted by the same specialists.

Is it, then, good or bad? It seems that a lack of scientific constraints
may be helpful in advancing scientific knowledge; indeed, the greatest
breakthroughs in Science seem to come from those who were regarded as
unorthodox in their methods.

Maybe AI is such an unorthodox Science, or perhaps an Art.
Let us keep AI this way!

Jacob L. Cybulski

japplega@csm9a.UUCP (Joe Applegate) (07/11/87)

> From jlc@goanna.OZ.AU.UUCP Sat Feb  5 23:28:16 206
>
> Maybe AI is such an unorthodox Science, or perhaps an Art.
> Let us keep AI this way!

I'm not sure there is any maybe about it!  AI development is, in my humble
opinion, the most creative expression of the programmer's art.  Any semi-
educated fool can code a program... but the creation of a useful,
productivity-enhancing application or system is far more art than science!
This is even more true of AI development: a query-and-answer-style expert
system can be coded in BASIC by a high-school hacker... but the true
application of AI is in sophisticated systems that employ high-quality
presentation techniques to eliminate the ambiguities so often present in a
text-only presentation.

One benefit of the advent of the personal computer is the redirection of
software product development away from the data-driven environment of DP and
accounting and towards the presentation-style environment of the non-DP
professional.  Fortunately, most AI development systems are acknowledging
this trend by providing graphical interfaces.

Art mimics science and the application of science is an art!

    Joe Applegate - Colorado School of Mines Computing Center
            {seismo, hplabs}!hao!isis!csm9a!japplega
                              or
 SYSOP @ M.O.M. AI BBS - (303) 273-3989 - 300/1200/2400 8-N-1 24 hrs.

       *** UNIX is a philosophy, not an operating system ***
 *** BUT it is a registered trademark of AT&T, so get off my back ***
 

sbrunnoc@hawk.CS.ULowell.EDU (Sean Brunnock) (07/29/87)

      Gentlemen, please! (my apologies to any women reading this)

      AI is a very young branch of science. Computer science as a whole
   is only a little more than 40 years old. How can you compare AI with
   mathematics or physics, which are thousands of years old?

      Aristotle made some of the first stabs at elemental chemistry and
   gravitation. From our enlightened viewpoint, can we call him a scientist?

      Give it time; it's too early to tell.

				   
				   S. Brunnock