[net.ai] defining AI, AI research methodology, jargon in AI

dyer%UCLA-CS@sri-unix.UUCP (11/29/83)

From:  Michael Dyer <dyer@UCLA-CS>

This is in three flaming parts:   (I'll probably never get up the steam to
respond again,  so I'd better get it all out at once.)

Part I.  "Defining intelligence", "defining AI" and/or "responding to AI
challenges" considered harmful:  (enough!)

Recently, I've started avoiding/ignoring AIList since, for the most
part, it's been an endless discussion on "defining AI" (or, most
recently, defending AI).  If I spent my time trying to "define/defend"
AI or intelligence, I'd get nothing done.  Instead, I spend my time
trying to figure out how to get computers to achieve some task -- exhibit
some behavior -- which might be called intelligent or human-like.
If/whenever I'm partially successful, I try to keep track of what's
systematic or insightful.  Both failure points and partial success
points serve as guides for future directions.  I don't spend my time
trying to "define" intelligence by BS-ing about it.  The ENTIRE
enterprise of AI is the attempt to define intelligence.

Here's a positive suggestion for all you AIList-ers out there:

It'd be nice to see more discussion of SPECIFIC programs/cognitive
models:  their Assumptions, their failures, ways to patch them, etc. --
along with contentful/critical/useful suggestions/reactions.

Personally, I find Prolog Digest much more worthwhile.  The discussions
are sometimes low level, but they almost always address specific issues,
with people often offering specific problems, code, algorithms, and
analyses of them.  I'm afraid AIList has been taken over by people who
spend so much time exchanging philosophical discussions that they've
chased away others who are very busy doing research and have a low BS
tolerance level.

Of course, if the BS is reduced, that means that the real AI world will
have to make up the slack.  But a less frequent digest with real content
would be a big improvement.  {This won't make me popular, but perhaps part
of the problem is that most of the contributors seem to be people who
are not actually doing AI, but who are just vaguely interested in it, so
their speculations are ill-informed and indulgent.  There is a use for
this kind of thing, but an AI digest should really be discussing
research issues.  This gets back to the original problem with this
digest -- i.e. that researchers are not using it to address specific
research issues which arise in their work.}

Anyway, here are some examples of task/domain topics that could be
addressed.  Each can be considered to be of the form:  "How could we get
a computer to do X":

          Model Dear Abby.
          Understand/engage in an argument.
          Read an editorial and summarize/answer questions about it.
          Build a daydreamer.
          Give legal advice.
          Write a science fiction short story.
               ...

{I'm an NLP/Cognitive modeling person -- that's why my list may look
bizarre to some people.}

You researchers in robotics/vision/etc.  could discuss, say, how to build
a robot that can:

          climb stairs
             ...
          recognize a moving object
             ...
          etc.

People who participate in this digest are urged to:  (1) select a
task/domain, (2) propose a SPECIFIC example which represents
PROTOTYPICAL problems in that task/domain, (3) explain (if needed) why
that specific example is prototypic of a class of problems, (4) propose
a (most likely partial) solution (with code, if at that stage), and (5)
solicit contentful, critical, useful, helpful reactions.

This is the way Prolog Digest is currently functioning, except at the
programming language level.  AIList could serve a useful purpose if it
were composed of ongoing research discussions about SPECIFIC, EXEMPLARY
problems, along with approaches, their limitations, etc.

If people don't think a particular problem is the right one, then they
could argue about THAT.  Either way, it would elevate the level of
discussion.  Most of my students tell me that they no longer read
AIList.  They're turned off by the constant attempts to "defend or
define AI".

Part II.  Reply to R-Johnson

Some of R-Johnson's criticisms of AI seem to stem from viewing
AI strictly as a TOOLS-oriented science.

{I prefer to refer to STRUCTURE-oriented work (i.e. content-free) as
TOOLS-oriented work and CONTENT-oriented work as DOMAIN or
PROCESS-oriented.  I'm referring to the distinction that was brought up
by Schank in "The Great Debate" with McCarthy at AAAI-83 in Washington, DC.}

In general,  tools-oriented work seems more popular and accepted
than content/domain-oriented work.  I think this is because:

     1.  Tools are domain independent, so everyone can talk about them
     without having to know a specific domain -- kind of like bathroom
     humor being more universally communicable than topical-political
     humor.

     2.  Tools have nice properties:  they're general (see #1 above);
     they have weak semantics (e.g. 1st order logic, lambda-calculus)
     so they're clean and relatively easy to understand.

     3.  No one who works on a tool need be worried about being accused
     of "ad hocness".

     4.  Breakthroughs in tools-research happen rarely, but when one
     does, the people associated with the breakthrough become
     instantly famous, because everyone can use their tool (e.g. Prolog).

In contrast, content or domain-oriented research and theories suffer
from the following ills:

     1.  They're "ad hoc" (i.e.  referring to THIS specific thing or
     other).

     2.  They have very complicated semantics,  poorly understood,
     hard to extend, fragile, etc. etc.

However,  many of the most interesting problems pop up in trying
to solve a specific problem which, if solved,  would yield insight
into intelligence.  Tools, for the most part, are neutral with respect
to content-oriented research questions.  What does Prolog or Lisp
have to say to me about building a "Dear Abby" natural language
understanding and personal advice-giving program?  Not much.
The semantics of lisp or prolog says little about the semantics of the
programs which researchers are trying to discover/write in Prolog or Lisp.
Tools are tools.  You take the best ones off the shelf you can find for
the task at hand.  I love tools and keep an eye out for
tools-developments with as much interest as anyone else.  But I don't
fool myself into thinking that the availability of a tool will solve my
research problems.
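To make the point concrete, here is a toy sketch (mine, not Dyer's; every name and table in it is invented for illustration) of the skeleton of a "Dear Abby" advice-giver.  Notice that the host language contributes almost nothing -- all the interesting behavior would have to come from the content tables, about which the tool is silent:

```python
# Hypothetical sketch of a toy advice-giver.  The "theory" of goal
# conflicts below is invented purely for illustration; the point is
# that the tool (Python here, Prolog or Lisp just as well) says
# nothing about what should go in these tables.

# Content level: a tiny, made-up catalog of goal-conflict patterns
# and stock advice for each.
GOAL_CONFLICTS = {
    ("preserve-marriage", "pursue-career"): "negotiate a shared schedule",
    ("keep-friendship", "tell-hard-truth"): "deliver the truth gently",
}

def diagnose(letter_goals):
    """Match the letter-writer's stated goals against known conflicts."""
    for pair, advice in GOAL_CONFLICTS.items():
        if set(pair) <= set(letter_goals):
            return advice
    return "need more content-level knowledge"

print(diagnose(["preserve-marriage", "pursue-career"]))
```

The two-entry table is where all the research problems live; swapping Python for Prolog would leave every one of them open.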

{Of course no theory is exclusively one or the other.  Also, there are
LEVELS of tools & content for each theory.  This levels aspect causes
great confusion.}

By and large, AIList discussions (when they get around to something
specific) center too much around TOOLS and not PROCESS MODELS (ie
SPECIFIC programs, predicates, rules, memory organizations, knowledge
constructs, etc.).

What distinguishes AI from compilers, OS, networking, or other aspects
of CS are the TASKS that AI-ers choose.  I want computers that can read
"War and Peace" -- what problems have to be solved, and in what order,
to achieve this goal?  Telling me "use logic" is like telling me
to "use lambda calculus" or "use production rules".

Part III.   Use and abuse of jargon in AI.

Someone recently commented in this digest on the abuse of jargon in AI.
Since I'm from the Yale school, and since Yale commonly gets accused of
this, I'm going to say a few words about jargon.

Different jargon for the same tools is BAD policy.  Different jargon
to distinguish tools from content is GOOD policy.  What if Schank
had talked about "logic"  instead of "Conceptual Dependencies"?
What a mistake that would have been!  Schank was trying to specify
how specific meanings (about human actions) combine during story
comprehension.  The fact that prolog could be used as a tool to
implement Schank's conceptual dependencies is neutral with respect
to what Schank was trying to do.
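As a rough illustration of that neutrality (my encoding, not Schank's notation), here is "John gave Mary a book" rendered with the Conceptual Dependency primitive ATRANS (abstract transfer of possession); the same content can be asserted in any tool without changing the content-level claim:

```python
# Sketch of a Conceptual Dependency form for "John gave Mary a book",
# using Schank's ATRANS primitive (abstract transfer of possession).
# The dictionary encoding is an illustration of mine, not CD notation.
cd = {
    "primitive": "ATRANS",
    "actor": "John",
    "object": "book",
    "from": "John",
    "to": "Mary",
}

# The same content could be asserted in Prolog as, say,
#   atrans(john, book, john, mary).
# Either way, the content-level claim -- that giving, selling, and
# donating all share one ATRANS structure -- is untouched by the tool.
def paraphrase(c):
    return f'{c["actor"]} transferred {c["object"]} to {c["to"]}'

print(paraphrase(cd))
```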

At IJCAI-83  I heard a paper (exercise for the reader to find it)
which went something like this:

     The work of Dyer (and others) has too many made-up constructs.
     There are affects, object primitives, goals, plans, scripts,
     settings, themes, roles, etc.  All this terminology is confusing
     and unnecessary.

     But if we look at every knowledge construct as a schema (frame,
     whatever term you want here), then we can describe the problem much
     more elegantly.  All we have to consider are the problems of:
     frame activation, frame deactivation, frame instantiation, frame
     updating, etc.

Here, clearly we have a tools/content distinction.  Wherever
possible I actually implemented everything using something like
frames-with-procedural-attachment (ie demons).  I did it so that I
wouldn't have to change my code all the time.  My real interest,
however, was at the CONTENT level.  Is a setting the same as an emotion?
Does the task "Recall the last 5 restaurants you were at" evoke the
same search strategies as "Recall the last 5 times you accomplished X"
or "the last 5 times you felt gratitude"?  Clearly, some classes of
frames are connected up to other classes of frames in different ways.
It would be nice if we could discover the relevant classes and it's
helpful to give them names (ie jargon).  For example, it turns out that
many (but not all) emotions can be represented in terms of abstract goal
situations.  Other emotions fall into a completely different class (e.g.
religious awe, admiration).  In my program "love" was NOT treated as
(at the content level) an affect.
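The tool-level machinery referred to above can be sketched in a few lines.  This is a minimal frames-with-procedural-attachment ("demon") mechanism of my own devising, not Dyer's actual implementation; the slot names and the sample demon are invented:

```python
# Minimal sketch of frames with procedural attachment ("demons").
# Note how little the machinery itself says: whether GRATITUDE is an
# abstract goal situation is a content-level question the frame
# mechanism cannot answer.

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.demons = {}          # slot name -> procedure fired on update

    def attach(self, slot, demon):
        """Attach a demon to be run whenever the slot is filled."""
        self.demons[slot] = demon

    def fill(self, slot, value):
        self.slots[slot] = value
        if slot in self.demons:
            self.demons[slot](self, value)

# Content level (invented example): gratitude as an abstract goal
# situation -- someone else's action achieved one of your goals.
fired = []
gratitude = Frame("GRATITUDE")
gratitude.attach("goal-achieved-by-other",
                 lambda f, v: fired.append(f"infer thanks toward {v}"))
gratitude.fill("goal-achieved-by-other", "neighbor")
print(fired[0])
```

The frame/demon machinery stays fixed while the content-level questions (which emotions reduce to goal situations, which do not) remain the actual research.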

When I was at Yale, at least once a year some tools-oriented person
would come through and give a talk of the form:  "I can
represent/implement your Scripts/Conceptual-Dependency/
Themes/MOPs/what-have-you using my tool X" (where X = ATNs, Horn
clauses, etc.).

I noticed that first-year students usually liked such talks, but the
advanced students found them boring and pointless.  Why?  Because if
you're content-oriented you're trying to answer a different set of
questions, and discussion of the form:  "I can do what you've already
published in the literature using Prolog" simply means "consider Prolog
as a nice tool" but says nothing at the content level, which is usually
where the advanced students are doing their research.

I guess I'm done.  That'll keep me for a year.

                                                  -- Michael Dyer

lum@osu-dbs.UUCP (12/05/83)

Perhaps Dyer is right.  Perhaps it would be a good thing to split net.ai/AIList
into two groups, net.ai and net.ai.d, a la net.jokes and net.jokes.d.  In one
the AI researchers could discuss actual AI problems, and in the other,
philosophers could discuss the social ramifications of AI, etc.  Take your pick.

Lum Johnson (cbosgd!osu-dbs!lum)