[net.ai] Minsky's definition of AI

aarons@cvaxa.UUCP (Aaron Sloman) (10/29/85)

-- Do we need a good definition of AI? -----------------------

Marvin Minsky once defined Artificial Intelligence as '... the
science of making machines do things that would require
intelligence if done by men'.

I don't know if he still likes this, but it is often quoted
with approval, even by at least one recent net-user. A slightly
different definition, similar in spirit but allowing for shifting
standards, is given in the textbook on AI by Elaine Rich
(McGraw-Hill 1983):
    '.. the study of how to make computers do things at which, at
    the moment, people are better.'

There are several problems with these definitions.
 (a) They suggest that AI is primarily a branch of engineering
concerned with making machines do things (though Minsky's use of
the word 'science' hints at a study of general principles).
 (b) Perhaps the main objection is their concern with WHAT is
done rather than HOW it is done. There are lots of things
computers do which would require intelligence if done by people
but which have nothing to do with AI, because there are
unintelligent ways of getting them done if you have enough speed.
E.g. calculators can do complex sums which would require
intelligence if done by people. Even simple sums done by a very
young child would be regarded as an indication of high
intelligence, though not if done by a simple mechanical
calculator. Was building calculators that are faster or more
accurate than people once AI? For Rich, does it matter in what
way people are currently better?
 (c) Much AI (e.g. work reported at IJCAI) is concerned with
studying general principles in a way that is neutral as to
whether it is used for making new machines or explaining how
existing systems (e.g. people or squirrels) work. For instance,
John McCarthy is said to have coined the term 'Artificial
Intelligence' but it is clear that his work is of this more
general kind, as is much of the work by Minsky and others at MIT.
Many of those who use computers in AI do so merely in order to
test, refine, or demonstrate their theories about how people do
something, or, more profoundly, because only with the aid of
computational concepts can we hope to express theories with rich
enough explanatory power. (Which does not mean that present-day
computational concepts are sufficient.)

For these reasons, the 'Artificial' part of the name is a
misnomer, and 'Cognitive Science' or 'Computational Cognitive
Science' might have been better names. But it is too late to
change the name now, despite the British Alvey Programme's use of
"IKBS" (Intelligent Knowledge Based Systems) instead of "AI"

-- Towards a better definition -------------------------------
Winston, in the second edition of his book on AI (Addison Wesley,
1984) defines AI as 'the study of ideas that enable computers to
be intelligent', but quickly moves on to identify two goals:
    'to make computers more useful'
    'to understand the principles that make intelligence
        possible'.

His second goal captures the spirit of my complaint about the
other definitions. (I made similar points in 'The Computer
Revolution in Philosophy' (Harvester and Humanities Press, 1978;
now out of print)).

All this assumes that we know what intelligence is: and indeed we
can recognise instances even when we cannot define it, as with
many other general concepts, like 'cause', 'mind', 'beauty',
'funniness'. Can we hope to have a study of general principles
concerning X without a reasonably clear definition of X?

Since almost any behaviour can be the product of either an
intelligent system (e.g. using false or incomplete beliefs or
bizarre motives), or an unintelligent system (e.g. an enormously
fast computer using an enormously large look-up table), it is
important to define intelligence in terms of HOW the behaviour is
produced.
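
To make the contrast concrete, here is a purely illustrative
sketch (in Python; the routine names and the 0..99 domain are
invented for the example, not taken from any real system). Two
routines produce identical behaviour over a bounded domain, one by
applying a general rule, the other by consulting a table built in
advance; nothing in the observed behaviour reveals which is which.

    def add_by_rule(x, y):
        # applies a general procedure; works for any pair of integers
        return x + y

    # an "unintelligent" alternative: a table enumerated in advance
    # for a bounded domain (here 0..99)
    ADD_TABLE = {(x, y): x + y for x in range(100) for y in range(100)}

    def add_by_table(x, y):
        # no rule is applied at run time; the answer is simply fetched
        return ADD_TABLE[(x, y)]

    # over the tabulated domain the two are behaviourally indistinguishable
    assert all(add_by_rule(x, y) == add_by_table(x, y)
               for x in range(100) for y in range(100))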

-- To kick off discussion here is a suggestion ---------------
Intelligent systems are those which:
 (A) are capable of using structured symbols (e.g. sentences or
states of a network; i.e. not just quantitative measures, like
temperature or concentration of blood sugar) in a variety of
roles including the representation of facts (beliefs),
instructions (motives, desires, intentions, goals), plans,
strategies, selection principles, etc.
 (B) are capable of being productively lazy (i.e. able to use the
information expressed in the symbols in order to achieve goals
with minimal effort).

Although it may not be obvious, various kinds of learning
capabilities can be derived from (B), which is why I have not
included learning as part of the definition, as some would.
There are many aspects of (A) and (B) which need to be elaborated
and clarified, including the notion of 'effort' and how different
sorts can be minimised, relative to the system's current
capabilities. For instance, there are situations in which the
intelligent (productively lazy) thing to do is develop an
unintelligent but fast and reliable way to do something which has
to be done often. (E.g. learning multiplication tables.)
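
A small illustrative sketch of this (in Python; the function is my
own invention for the example): 'productive laziness' of this kind
is what programmers call memoisation. Effort is spent once on the
slow, general route, and repeated instances are then answered from
a fast, unintelligent table.

    _times_table = {}   # the "learned multiplication table"

    def times(x, y):
        if (x, y) not in _times_table:
            # the slow, general route: repeated addition
            _times_table[(x, y)] = sum(x for _ in range(y))
        # later calls with the same arguments are a cheap table look-up
        return _times_table[(x, y)]

    print(times(7, 8))   # computed the slow way, then stored
    print(times(7, 8))   # answered straight from the table

The second call involves no 'thinking' at all, which is exactly the
point: the intelligence lies in deciding to build the table, not in
using it.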

Given a suitable notion of what an intelligent system is, I would
then define AI as the study of principles relevant to explaining
or designing actual and possible intelligent systems, including
the investigation of both general design requirements and
particular implementation tradeoffs.

The reference to 'actual' systems includes the study of human and
animal intelligence and its underlying principles, and the
reference to 'possible' systems covers principles of engineering
design for new intelligent systems, as well as possible organisms
that might develop one day.

The study of ranges of possibilities (what the limits and
tradeoffs are, how different possibilities are related, how they
can be generated, etc.) is a part of any theoretical
understanding, and good AI MUST be theoretically based. There is
lots of bad AI -- what John McCarthy once referred to as the
'look Ma, no hands' variety.

The definition could be tied more closely to human and animal
intelligence by requiring the ability to cope with multiple
motives in real time, with resource constraints, in an
environment which is partly friendly, partly unfriendly. But
probably (B) can be interpreted as including this as a special
case. More generally, it is necessary to say something about the
nature of the goals and the structure of the environment in which
they are to be achieved.

I've probably gone on too long for a net-wide discussion.
Comments welcome.
    Aaron Sloman
-- 
Aaron Sloman, U of Sussex, Cognitive Studies, Brighton, BN1 9QN, England
uucp:...mcvax!ukc!cvaxa!aarons  arpa/janet: aarons%svga@uk.ac.ucl.cs
                                     OR     aarons%svga@ucl-cs

eugene@ames.UUCP (Eugene Miya) (11/03/85)

Interesting posting.
I'm not doing AI work, but I have something to share.

Two weeks ago on the plane down to JPL/Caltech, I read a very interesting
definition of "Intelligence" in the airline's magazine (PSA).
Intelligence is the ability to simultaneously hold two contradictory
thoughts in one's head. I am working on parallelism, and I sort of like
that definition.

From the Rock of Ages Home for Retired Hackers:
--eugene miya
  NASA Ames Research Center
  {hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
  emiya@ames-vmsb

jbn@wdl1.UUCP (11/05/85)

On the definition of intelligence:

     Intelligence is in a sense a matter of degree.  We can show this
by looking at the animal kingdom.  We will assume that normal humans are 
intelligent.  We can then ask:

	Are monkeys intelligent?
	Are dogs intelligent?
	Are horses intelligent?
	Are geese intelligent?
	Are chickens intelligent?

Chickens are generally considered unintelligent, at least by people who
deal with them.  So somewhere in that range is the lower bound of intelligent
life.  Where is it, and why?  Comments?

					John Nagle

al@mot.UUCP (Al Filipski) (11/09/85)

The idea that intelligence is dependent on processing thoughts (maybe 
contradictory ones) simultaneously seems like an important one. (but, 
on the other hand, maybe it's not :-) Back in the 50's, George
Miller gave much evidence that we can hold about 7 "chunks"
in our short-term memory simultaneously.  Each chunk is often a
pointer to a known concept in our long-term memory.
This seven-chunk cache is a sort of work area for the consciousness.
What if some intelligence could handle 1000 chunks as we handle
the seven? Would it be able to see connections among
things that our minds can't conceive of? Anyway, it seems that
the cache must be expensive if we only have a 7-chunker.
(maybe when the Japanese start making it the price will go down :-)
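
To make the picture concrete, here is a toy sketch (Python; the
capacity constant and all the names are mine, purely for
illustration, not anything from Miller): a fixed-size working
memory whose entries are just keys into a much larger long-term
store, with the oldest chunk evicted when the cache is full.

    from collections import OrderedDict

    CAPACITY = 7   # Miller's "magical number", used here only illustratively

    long_term_memory = {           # the large, cheap store of known concepts
        "dog": "...", "bone": "...", "garden": "...", "digging": "...",
    }
    working_memory = OrderedDict() # the small, expensive cache of active chunks

    def attend_to(concept):
        # bring a chunk into working memory, evicting the oldest if full
        if concept in working_memory:
            working_memory.move_to_end(concept)    # refresh it
        else:
            if len(working_memory) >= CAPACITY:
                working_memory.popitem(last=False) # forget the oldest chunk
            working_memory[concept] = long_term_memory.get(concept)

    for c in ["dog", "bone", "garden", "digging"]:
        attend_to(c)
    print(list(working_memory))    # the handful of chunks currently "in mind"

The hypothetical 1000-chunk mind is the same structure with a
larger CAPACITY; the interesting question is what it could notice
that a 7-chunker cannot.
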
--------------------------------------------------------------------------
Alan Filipski,  UNIX group,  Motorola Microsystems, Tempe, AZ  U.S.A 85282
seismo!ut-sally!oakhill!mot!al, ihnp4!mot!al, ucbvax!arizona!asuvax!mot!al
--------------------------------------------------------------------------
Complete the following analogy: "aneroid" is to "exculpatory"
as "exegesis" is to ... 

crs@lanl.ARPA (11/10/85)

> On the definition of intelligence:
> 
>      Intelligence is in a sense a matter of degree.  We can show this
> by looking at the animal kingdom.  We will assume that normal humans are 
> intelligent.  We can then ask:
> 
> 	Are monkeys intelligent?
> 	Are dogs intelligent?
> 	Are horses intelligent?
> 	Are geese intelligent?
> 	Are chickens intelligent?
> 
> Chickens are generally considered unintelligent, at least by people who
> deal with them.  So somewhere in that range is the lower bound of intelligent
> life.  Where is it, and why?  Comments?

But are chickens *totally* unintelligent?

I seem to recall reading of chickens being trained to perform simple
tasks.

*Is* there a lower bound on intelligence?

Or it "intelligence" a continuum?  I. e. humans, apes ... chickens ...

At first thought, the question seemed trivial, merely nit-picking.
But is it?  I can envision two views of this "lower bound:"

1.  Let's say, for the sake of our model of intelligence, that there
is a lower bound, beneath which we will treat entities as too
unintelligent to be considered for the purposes in which we are
interested.

2.  There *is* a lower bound beneath which entities *are* unintelligent.

I think the latter is limiting and that it may be well to avoid a
mind-set of this type.  The former is simply the scope limitation that is
used in the construction of all but the simplest models.

My question is, do we want to allow ourselves to think, or even to
subconsciously feel, that there really is such a lower bound, or do we
want to keep firmly in mind that we are working with a *model?*
-- 
All opinions are mine alone...

Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa

hrp@cray.UUCP (Hal Peterson) (11/11/85)

> Intelligence is the ability to simultaneously hold two contradictory
> thoughts in one's head.
> --eugene miya

I have never liked that definition.  If one is loose in defining
"thoughts" and "head," it's a very good definition for a large class
of bugs: inconsistent use of representations.  This is certainly an
aspect of human intelligence, but the definition can be fulfilled
without that sort of complexity.
-- 
Hal Peterson / Cray Research / 1440 Northland Dr. / Mendota Hts, MN  55120
	UUCP:  ihnp4!cray!hrp		phone:  (612) 681-3085

crs@lanl.ARPA (11/12/85)

> Or it "intelligence" a continuum?  I. e. humans, apes ... chickens ...

Oops!

Of course that should have read:

Or is "intelligence" ...
   ^^
Sorry 'bout that.
-- 
All opinions are mine alone...

Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa

dave@quest.UUCP (dave) (11/22/85)

> > On the definition of intelligence:
> > 
> >      Intelligence is in a sense a matter of degree.  We can show this
> > by looking at the animal kingdom.  ...
> 
> *Is* there a lower bound on intelligence?
> 
> Or it "intelligence" a continuum?  I. e. humans, apes ... chickens ...
> ...
> My question is, do we want to allow our selves to think, or even to
> subconciously feel, that there really is such a lower bound or do we
> want to keep firmly in mind that we are working with a *model?*
> -- 
> Charlie Sorsby

It is interesting that many people seem to define "intelligence"
as "what human beings do".  Many times I have heard arguements
that basically come down to:  "X isn't a human being, therefore
it isn't intelligent."
-- 

David Messer   UUCP:  ...ihnp4!quest!dave
                      ...ihnp4!encore!vaxine!spark!14!415!sysop
               FIDO:  14/415 (SYSOP)

@gatech.UUCP (11/25/85)

---------------------Reply to mail dated 22-NOV-1985 12:28---------------------

>It is interesting that many people seem to define "intelligence"
>as "what human beings do".  Many times I have heard arguements
>that basically come down to:  "X isn't a human being, therefore
>it isn't intelligent."
>-- 
> 
>David Messer   UUCP:  ...ihnp4!quest!dave
>                      ...ihnp4!encore!vaxine!spark!14!415!sysop
>               FIDO:  14/415 (SYSOP)

I agree that saying human beings are the ultimate definition of intelligence
is a crock.  My question is: what are the necessary functions for
intelligence?

    I feel that for a being to be intelligent it must have some
sort of sensory perception, such as sight, hearing or touch.  Also I feel that
the being must have some concept of ego.


     Any comments?


Brian Mahoney

"Thinking is an eternal problem.  
 What is the problem, you ask?
 Man has never done it."