[net.ai] Notes on AAAI '86

kort@hounx.UUCP (B.KORT) (08/21/86)

                              Notes on AAAI

                                Barry Kort


                                 Abstract

       The Fifth Annual AAAI Conference on Artificial Intelligence
       was held August 11-15 at the Philadelphia Civic Center.

       These notes record the author's personal impressions of the
       state of AI, and the business prospects for AI technology.
       The views expressed are those of the author and do not
       necessarily reflect the perspective or intentions of other
       individuals or organizations.

                                  * * *


       The American Association for Artificial Intelligence held
       its Fifth Annual Conference during the week of August 11,
       1986, at the Philadelphia Civic Center.

       Approximately 5000 attendees were treated to the latest
       results of this fast-growing field.  An extensive program of
       tutorials enabled the naive beginner and the technical
       professional alike to rise to a common baseline of
       understanding.  Research and Science Sessions concentrated on
       the theoretical underpinnings, while the complementary
       Engineering Sessions focused on reduction of theory to
       practice.

       Dr. Herbert Schorr of IBM delivered the Keynote Address.
       His message was simple and straightforward:  AI is here
       today, it's real, and it works.  The exhibit floor was a sea
       of high-end workstations, running flashy applications
       ranging from CAT scan imagery to automated fault diagnosis,
       to automated reasoning, to 3-D scene animation, to
       iconographic model-based reasoning.  Symbolics, TI, Xerox,
       Digital, HP, Sun, and other vendors exhibited state of the
       art hardware, while Intellicorp, Teknowledge, Inference,
       Carnegie-Mellon Group, and other software houses offered
       knowledge engineering power tools that make short work of
       automated reasoning.

       Knowledge representation schemata include the ubiquitous tree,
       as well as animated iconographic models of dynamic systems.
       Inductive and deductive reasoning and goal-directed logic
       appear in the guise of forward and backward chaining
       algorithms which seek the desired chain of nodes linking
       premise to predicted conclusion or hypothesis to observed
       symptoms.  Such schemata are especially well adapted to the
       diagnosis of ills, be it human ailment or machine
       malfunction.
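
       As a rough sketch of the backward-chaining idea above, a
       minimal Python illustration follows.  The rule base and the
       symptom names are invented for the example; no exhibited
       system is being quoted.

       # Minimal backward chainer: each rule maps a conclusion to
       # the premises that must all hold for it to follow.
       RULES = {
           "faulty_power_supply": ["fan_silent", "light_off"],
           "blown_fuse":          ["light_off", "outlet_ok"],
       }

       def prove(goal, facts, rules=RULES):
           """True if goal is a known fact or can be chained back
           to known facts through the rule base."""
           if goal in facts:
               return True
           premises = rules.get(goal)
           if premises is None:
               return False
           return all(prove(p, facts, rules) for p in premises)

       # Which hypotheses survive the observed symptoms?
       observed = {"light_off", "outlet_ok"}
       for hypothesis in RULES:
           print(hypothesis, prove(hypothesis, observed))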

       Natural Language understanding remains a hard problem, due
       to the inscrutable ambiguity of most human-generated
       utterances.  Nevertheless, silicon can diagram sentences as
       well as a precocious fifth grader.  In limited-domain
       vocabularies, the semantic content of such diagrammatic
       representations can be reliably extracted.
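
       For concreteness, here is a toy sketch of limited-domain
       semantic extraction in Python.  The vocabulary, patterns,
       and slot names are invented for illustration only.

       import re

       # One pattern per sentence form in the toy domain; the named
       # groups are the semantic slots we want to fill.
       PATTERNS = [
           re.compile(r"schedule (?P<test>\w+ scan) for"
                      r" (?P<patient>\w+) on (?P<day>\w+)"),
           re.compile(r"show the (?P<test>\w+ scan)"
                      r" results for (?P<patient>\w+)"),
       ]

       def extract(sentence):
           """Return a slot/value dictionary, or None if the
           sentence falls outside the toy domain."""
           for pattern in PATTERNS:
               match = pattern.match(sentence.lower())
               if match:
                   return match.groupdict()
           return None

       print(extract("Schedule CAT scan for Jones on Tuesday"))
       # -> {'test': 'cat scan', 'patient': 'jones', 'day': 'tuesday'}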

       Robotics and vision remain challenging fields, but advances
       in parallel architectures may clear the way for notable
       progress in scene recognition.

       Qualitative reasoning, model-based reasoning, and reasoning
       by analogy still require substantial human guidance, perhaps
       because of the difficulty of implementing the interdomain
       pattern recognition which humans know as analogy, metaphor,
       and parable.

       Interesting philosophical questions abound when AI moves
       into the fields of automated advisors and agents.  Such
       systems require the introduction of Value Systems, which may
       or may not conflict with individual preferences for
       benevolent ethics or hard-nosed business pragmatics. One
       speaker chose the provocative title, "Can Machines Be
       Intelligent If They Don't Give a Damn?"  We may be on the
       threshold of Artificial Intelligence, but we have a long way
       to go before we arrive at Artificial Wisdom.  Nevertheless,
       some progress is being made in reducing to practice such
       esoteric concepts as Theories of Equity and Justice, leading
       to the possibility of unbiased Jurisprudence.

       AI goes hand in hand with Theories of Learning and
       Instruction, and the field appears to be paying dividends in
       the art and practice of knowledge exchange, following the
       strategy first suggested by Socrates some 2500 years ago.
       The dialogue format abounds, and mixed initiative dialogues
       seem to capture the essence of mutual teaching and
       mirroring.  Perhaps sanity can be turned into an art form
       and a science.

       Belief Revision and Truth Maintenance enable systems to
       unravel confusion caused by the injection of mutually
       inconsistent inputs.  Nobody's fool, these systems let the
       user know that there's a fib in there somewhere.
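
       A very small sketch of the idea, assuming nothing about any
       particular truth-maintenance system: record each assertion
       with its source, and flag the user as soon as a proposition
       and its negation are both on file.

       # Toy belief store that notices mutually inconsistent inputs.
       class BeliefStore:
           def __init__(self):
               self.sources = {}   # (proposition, truth value) -> source

           def tell(self, proposition, truth, source):
               self.sources[(proposition, truth)] = source
               # If the opposite assertion is also on file, there is
               # a fib in there somewhere -- say so.
               if (proposition, not truth) in self.sources:
                   print("Contradiction on %r: %s vs. %s"
                         % (proposition,
                            self.sources[(proposition, True)],
                            self.sources[(proposition, False)]))

       store = BeliefStore()
       store.tell("valve_open", True, "sensor 12")
       store.tell("valve_open", False, "operator log")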

       Psychology of computers becomes an issue, and the Silicon
       Syndrome of Neuroses can be detected whenever the machines
       are not taught how to think straight.  Machines are already
       sapient.  Soon they will acquire sentience, and maybe even
       free will (nothing more than a random number generator
       coupled with a value system).  Perhaps by the end of the
       Millennium (just 14 years away), the planet will see its
       first Artificial Sentient Being.  Perhaps von Neumann knew
       what he was talking about when he wrote his cryptic volume
       entitled, On the Theory of Self-Reproducing Automata.
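
       Taking the parenthetical above literally for a moment, the
       recipe caricatures easily in Python.  This is a deliberately
       toy sketch; the options and the value scores are made up.

       import random

       # "Free will" as a random number generator coupled with a
       # value system: chance proposes, the values dispose.
       OPTIONS = ["tell the truth", "stay silent", "change the subject"]
       VALUES  = {"tell the truth": 0.9,
                  "stay silent": 0.5,
                  "change the subject": 0.2}

       def choose(options, values):
           """Sample an option at random, weighted by how highly
           the value system rates it."""
           weights = [values[option] for option in options]
           return random.choices(options, weights=weights, k=1)[0]

       print(choose(OPTIONS, VALUES))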

       There were no Cybernauts in Philadelphia this year, but many
       of the piece parts were in evidence.  Perhaps it is just a
       matter of time until the Golem takes its first step.

       In the meantime, we have entered the era of the Competent
       System, somewhat short on world-class expertise, but able to
       hold its own in today's corporate culture.  It learns about
       as fast as its human counterpart, and is infinitely
       clonable.

       Once upon a time it was felt that machines should work and
       people should think.  Now that machines can think, perhaps
       people can take more time to enjoy the state of being called
       Life.


                                  * * *



       Lincroft, NJ
       August 17, 1986

hsgj@batcomputer.TN.CORNELL.EDU (Mr. Barbecue) (08/29/86)

(not really a followup article, more of a commentary)

I find it very interesting that there is so much excitement generated over
parallel-processing computer systems by the AI community.  Interesting in
that the problems of AI (the intractability of language, vision, and general
cognition, to name a few) are nowhere near being limited by computational
power, but rather by our lack of understanding.  If somebody had managed to
create a truly intelligent system, I think we would have heard about it by
now, even if the program took a month to run.  The fact of the matter is that
our knowledge of such problems is minimal.  Attempts to solve them lead to
researchers banging their heads against a very hard wall, indeed.  So what
is happening?  The field that was once A.I. is very quickly headed back to
its origins in computer science and is producing "Expert Systems" by the
droves.  The problem isn't that they aren't useful, but rather that they
are being touted as A.I. itself, while true insights into actual human
thinking are still rare (if not non-existent).

Has everybody given up?  I doubt it.  However, it seems that economic reality
has set in.  People are forced to show practical systems with everyday
applications.  Financiers can't understand why we would be overjoyed if we
could develop a system that learns like a baby, and so all the money is being
siphoned away into robotics, Expert Systems, and even spelling checkers!
(no, I don't think that welding cars together requires a great deal of true
intelligence, though technically it may be a great feat)

So what is one to do?  Go into cog-psych?  At least psychologists are working
on the fundamental problems that AI started, but many seem to be grasping at
straws, trying to find a simple solution (e.g., family resemblance, primary
attribute analysis, etc.).

What seems to be lacking is a cogent combination of theories.  Some attempts
have been made, but these authors basically punt on the issue, stating
something like "none of the above theories adequately explain the observed
phenomena; perhaps the solution is a combination of current hypotheses".
Very good, now let's do that research and see if this is true!

My opinion?  Well, some current work has dealt with computer nervous systems
(Science, sometime this summer).  This is similar in form to the hypercube
systems, but the theory seems different.  Really, the work is toward computer
neurons: distributed systems in which each element contributes a little to
the final result.  Signals are not binary, but graded.  They combine with
other signals from various sources and form an output.  Again, this could be
done with a linear machine that holds partial results.  But I'm not suggesting
that this alone is a solution; it's just interesting.  My real opinion is that
without "bringing baby up," so to speak, we won't get much accomplished.  The
ultimate system will have to be able to reach out, grasp (whether visually or
physically, or whatever) and sense the world around it in a rich manner.  It
will have to be malleable, but still have certain guidelines built in.  It
must truly learn, forming a myriad of connections with past experiences and
thoughts.  In sum, it will have to be a living animal (though made of sand...).
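
To make the graded-not-binary point concrete, here is a rough sketch in Python
of such a unit.  The weights and inputs are invented; nothing here is taken
from the Science article.

    import math

    def graded_unit(inputs, weights, bias=0.0):
        """Combine graded signals from several sources into one graded
        output: a weighted sum squashed smoothly into (0, 1)."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))   # smooth, not all-or-none

    # Each element contributes a little to the final result.
    signals = [0.2, 0.9, 0.4]      # graded inputs from other elements
    weights = [0.5, 1.5, -0.7]     # connection strengths
    print(graded_unit(signals, weights))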

Yes, I do think that you need the full range of systems to create a truly
intelligent system.  Helen Keller still had touch.  She could feel vibrations,
and she could use this information to create a world that was probably
perceptually much different from ours.  But she had true intelligence.
(I realize that the semantics of all these words and phrases are highly
debated; you know what I'm talking about, so don't try to be difficult!)  :)

Well, that's enough for a day.

Ted Inoue.
Cornell

-- 
ARPA:  hsgj%vax2.ccs.cornell.edu@cu-arpa.cs.cornell.edu
UUCP:  ihnp4!cornell!batcomputer!hsgj   BITNET:  hsgj@cornella

kort@hounx.UUCP (B.KORT) (09/01/86)

I appreciated Ted Inoue's commentary on the State of AI.  I especially
agree with his point that a cogent combination of theories is needed.
My own betting card favors the theories of Piaget on learning, coupled
with the modern animated-graphic mixed-initiative dialogues that merge
the Socratic-style dialectic with inexpensive PCs.  See, for instance,
the Mind Mirror by Electronic Arts.  It's a flashy example of the clever
integration of Cognitive Psychology, Mixed Initiative Dialogues, Color
Animated Graphics, and the Software/Mindware Exchange.  Such illustrations
of the imagery in the Mind's Eye can breathe new life into the relationship
between silicon systems and their carbon-based friends.

Barry Kort
hounx!kort