[net.ai] Inscrutable Intelligence

BATALI%MIT-OZ@MIT-MC.ARPA (11/03/83)

isms constructed in accord
with the account will constitute evidence that the account is correct.
(This is where the Turing test comes in, not as a definition of
intelligence, but as evidence for its presence.)

MINSKY%MIT-OZ@MIT-MC.ARPA (11/04/83)

Sure.  I agree you want an account of what intelligence is "about".
When I complained about making a "definition" I meant
one of those useless compact thingies in dictionaries.

But I don't agree that you need this for scientific motivation.
Batali: do you really think Biologists need definitions of Life
for such purposes?

Finally, I simply don't think this is a compact phenomenon.
Any such "account", if brief, will be very partial and incomplete.
To expect a test to show that "the account is correct" depends
on the nature of the partial theory.  In a nutshell, I still
don't see any use at all for
such a definition, and it will lead to calling all sorts of
partial things "intelligence".  The kinds of accounts to confirm
are things like partial theories that need their own names, like

   heuristic search method
   credit-assignment scheme
   knowledge-representation scheme, etc.

As in biology, we simply are much too far along to be so childish as
to say "this program is intelligent" and "this one is not".  How often
do you see a biologist do an experiment and then announce "See, this
is the secret of Life"?  No.  He says, "this shows that enzyme
FOO is involved in degrading substrate BAR".

ISAACSON%USC-ISI@sri-unix.UUCP (11/04/83)

I think that your message was really addressed to Minsky, who
already replied.

I also think that the most one can hope for are confirmations of
"partial theories" relating, respectively, to the various aspects
underlying the phenomena of "intelligence".  Note that I say
"phenomena" (plural).  Namely, we may have on our hands a broad
spectrum of "intelligences", each one of which is the manifestation
of a somewhat *different* mix of underlying ingredients.  In fact,
for some time now I have felt that AI should really stand for the
study of Artificial Intelligences (plural) and not merely
Artificial Intelligence (singular).

BATALI%MIT-OZ@MIT-MC.ARPA (11/04/83)

    ...and it will lead to calling all sorts of
    partial things "intelligence".

If the account is partial and incomplete, and leads to calling partial
things intelligence, then the account must be improved or rejected.
I'm not claiming that an account must be short, just that we need
one.

    The kinds of accounts to confirm
    are things like partial theories that need their own names, like

       heuristic search method
       credit-assignment scheme
       knowledge-representation scheme, etc.


But why are these things interesting?  Why is heuristic search better
than "blind" search?  Why need we assign credit?  Etc?  My answer:
because such things are the "right" thing to do for a program to be
intelligent.  This answer appeals to a pre-theoretic conception of
what intelligence is.   A more precise notion would help us
assess the relevance of these and other methods to AI.
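[The contrast Batali draws can be made concrete with a small sketch in modern Python -- an editorial illustration, not part of the 1983 thread.  Blind breadth-first search explores a grid uniformly in all directions, while a greedy best-first search guided by a Manhattan-distance heuristic heads straight for the goal and expands far fewer nodes.]

```python
# Editorial sketch (not from the original thread): blind breadth-first
# search vs. greedy best-first search with a Manhattan-distance heuristic,
# comparing how many nodes each expands on an empty grid.
import heapq
from collections import deque

def neighbors(pos, size):
    """Yield the 4-connected neighbors of pos inside a size x size grid."""
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny)

def blind_search(start, goal, size):
    """Breadth-first search: ignores the goal until it stumbles on it.
    Returns the number of nodes expanded."""
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        pos = frontier.popleft()
        expanded += 1
        if pos == goal:
            return expanded
        for nxt in neighbors(pos, size):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search(start, goal, size):
    """Greedy best-first search: always expands the frontier node the
    Manhattan-distance heuristic rates closest to the goal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, seen, expanded = [(h(start), start)], {start}, 0
    while frontier:
        _, pos = heapq.heappop(frontier)
        expanded += 1
        if pos == goal:
            return expanded
        for nxt in neighbors(pos, size):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))

blind = blind_search((0, 0), (9, 9), 10)
guided = heuristic_search((0, 0), (9, 9), 10)
print(blind, guided)   # blind search expands many more nodes
```

[On an empty 10x10 grid the blind search must sweep nearly the whole board before reaching the far corner, while the guided search walks almost directly to it -- which is the point at issue: the heuristic is "better" only relative to some prior notion of what the search is for.]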

One potential reason to make a more precise "definition" of
intelligence is that such a definition might actually be useful in
making a program intelligent.  If we could say "do that" to a program
while pointing to the definition, and if it "did that", we would have
an intelligent program.  But I am far too optimistic.  (Perhaps
"childishly" so).

MINSKY%MIT-OZ@MIT-MC.ARPA (11/04/83)

     One potential reason to make a more precise "definition" of
     intelligence is that such a definition might actually be useful
     in making a program intelligent.  If we could say "do that" to a
     program while pointing to the definition, and if it "did that",
     we would have an intelligent program.  But I am far too
     optimistic.

I think so.  You keep repeating how good it would be to have a good
definition of intelligence and I keep saying it would be as useless as
the biologists' search for the definition of "life".  Evidently
we're talking past each other so it's time to quit.

Last word: my reason for making the argument was that I have seen
absolutely no shred of good ideas in this forum, apparently because of
this definitional orientation.  I admit the possibility that some
good mathematical insight could emerge from such discussions.  But
I am personally sure it won't, in this particular area.

DJC%MIT-OZ@MIT-MC.ARPA (11/04/83)

From:  Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>

There's a wonderful quote from Wittgenstein that goes something like:

  One of the most fundamental sources of philosophical bewilderment is to have
  a substantive but be unable to find the thing that corresponds to it.

Perhaps the conclusion from all this is that AI is an unfortunate name for the
enterprise, since no clear definitions for I are available.  That shouldn't
make it seem any more flaky than, say, "operations research" or "management
science" or "industrial engineering" etc. etc.  People outside a research area
care little what it is called; what it has done and is likely to do is
paramount.

Trying to find the ultimate definition for field-naming terms is a wonderful,
stimulating philosophical enterprise.  However, one can make an empirical
argument that this activity has little impact on technical progress.

JCMA%MIT-AI@sri-unix.UUCP (11/05/83)

 the normal
science phase.

    However, one can make an empirical argument that this activity has little
    impact on technical progress.

Let's see your empirical argument.  I haven't noticed any intelligent machines
running around the AI lab lately.  I certainly haven't noticed any that can
carry on any sort of reasonable conversation.  Have you?  So, where is all
this technical progress regarding understanding intelligence?

Make sure you don't fall into the trap of thinking that intelligent machines
are here today (Douglas Hofstadter debunks this position in his "Artificial
Intelligence: Subcognition as Computation," CS Dept., Indiana U., Nov. 1982).

JK%SU-AI@sri-unix.UUCP (11/05/83)

From:  Jussi Ketonen <JK@SU-AI>

On useless discussions - one more quote by Wittgenstein:
        Wovon man nicht sprechen kann, darueber muss man schweigen.
        (Whereof one cannot speak, thereof one must be silent.)

unbent@ecsvax.UUCP (11/07/83)

I sympathize with the longing for an "operational definition" of
'intelligence'--especially since you've got to write *something* on
grant applications to justify all those hardware costs.  (That's not a
problem we philosophers have.  Sigh!)  But I don't see any reason to
suppose that you're ever going to *get* one, nor, in the end, that you
really *need* one.

You're probably not going to get one because "intelligence" is
one of those "open-textured", "clustery" kinds of notions.  That is,
we know it when we see it (most of the time), but there are no necessary and
sufficient conditions that one can give in advance which instances of it
must satisfy.  (This isn't an uncommon phenomenon.  As my colleague Paul Ziff
once pointed out, when we say "A cheetah can outrun a man", we can recognize
that races between men and *lame* cheetahs, *hobbled* cheetahs, *three-legged*
cheetahs, cheetahs *running on ice*, etc. don't count as counterexamples to the
claim even if the man wins--when such cases are brought up.  But we can't give
an exhaustive list of spurious counterexamples *in advance*.)

Why not rest content with saying that the object of the game is to get
computers to be able to do some of the things that *we* can do--e.g.,
recognize patterns, get a high score on the Miller Analogies Test,
carry on an interesting conversation?  What one would like to say, I
know, is "do some of the things we do *the way we do them*"--but the
problem there is that we have no very good idea *how* we do them.  Maybe
if we can get a computer to do some of them, we'll get some ideas about
us--although I'm skeptical about that, too.

                        --Jay Rosenberg (ecsvax!unbent)

asa@rayssd.UUCP (11/09/83)

The problem with a psychological definition of intelligence is in finding
some way to make it different from what animals do, and cover all of the
complex things that humans can do.  It used to be measured by written
tests.  These were grossly unfair, so visual tests were added.  Those tend to
be grossly unfair because of cultural bias. Dolphins can do very
"intelligent" things, based on types of "intelligent behavior". The best
definition might be based on the rate at which learning occurs, as some
have suggested, but that is also an oversimplification. The ability to
deduce cause and effect, and to predict effects is obviously also
important. My own feeling is that it has something to do with the ability
to build a model of yourself and modify yourself accordingly. It may
be that "I conceive" (not "I think"), or "I conceive and act", or "I
conceive of conceiving" may be as close as we can get.

mac@uvacs.UUCP (11/10/83)

Regarding inscrutability of intelligence [sri-arpa.13363]:

Actually, it's typical that a discipline can't define its basic object of
study.  Ever heard a satisfactory definition of mathematics (it's not just
the consequences of set theory) or philosophy?  What is physics?

Disciplines are distinguished from each other for historical and
methodological reasons.  When they can define their subject precisely it is
because they have been superseded by the discipline that defines their
terms.

It's usually not important (or possible) to define e.g. intelligence
precisely.  We know it in humans.  This is where the IQ tests run into
trouble.  AI seems to be about behavior in computers that would be called
intelligent in humans.  Whether the machines are or are not intelligent
(or, for that matter, conscious) is of little interest and no import.  In
this I guess I agree with Rorty [sri-arpa.13322].  Rorty is willing to
grant consciousness to thermostats if it's of any help.

(Best definition of formal mathematics I know: "The science where you don't
know what you're talking about or whether what you're saying is true".)

			A. Colvin
			mac@virginia

jcma%MIT-MC@sri-unix.UUCP (11/10/83)

 the degree to which a program exhibits
"intelligence."

If you were being asked to spend $millions on a field of inquiry, wouldn't you
find it strange (bordering on absurd) that the principal proponents couldn't
render an operational definition of the object of investigation?

p.s.  I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?).  So, if worst comes to worst, AI can just
borrow psychology's definition and improve on it.