[comp.ai.digest] Seminar - On the Threshold of Knowledge

ELIZABETH@OZ.AI.MIT.EDU.UUCP (11/09/87)

                           NE43, 8TH FLOOR
                         THUR, 11/12, 4:00PM

                    ON THE THRESHOLD OF KNOWLEDGE
                       The Case for Inelegance

                         Dr. Douglas B. Lenat
                       Principal Scientist, MCC


In this talk, I would like to present a surprisingly compact, powerful,
elegant set of reasoning methods that form a set of first principles
which explain creativity, humor, and common sense reasoning -- a sort of
"Maxwell's Equations" of Thought.  I'd like very much to present them,
but, sadly, I don't believe they exist.  So, instead, I'll tell you what
I've been working on down in Texas for the last three years.

Intelligent behavior, especially in unexpected situations, requires
being able to fall back on general knowledge, and being able to
analogize to specific but far-flung knowledge.  As Marvin Minsky said,
"the more we know, the more we can learn".

Unfortunately, the flip side of that adage comes into play every time we
build and run a program that doesn't know very much to begin with,
especially for tasks like semantic disambiguation of sentences, or open-ended
learning by analogy.  So-called expert systems finesse this by
restricting their tasks so much that they can perform relatively narrow
symbol manipulations which nevertheless are interpreted meaningfully
(and, I admit, usefully) by human users.  But such systems are
hopelessly brittle:  they do not cope well with novelty, nor do they
communicate well with each other.

OK, so the mattress in the road to AI is Lack of Knowledge, and the
anti-mattress is Knowledge.  But how much does a program need to know,
to begin with?  The annoying, inelegant, but apparently true answer is:
a non-trivial fraction of consensus reality -- the few million things
that we all know, and that we assume everyone else knows.  If I liken
the Stock Market to a roller-coaster, and you don't know what I mean, I
might liken it to a seesaw, or to a steel spring.  If you still don't
know what I mean, I probably won't want to deal with you anymore.

It will take about two person-centuries to build up that KB, assuming
that we don't get stuck too badly on representation thorns along the
way.  CYC -- my 1984-1994 project at MCC -- is an attempt to build that
KB.  We've gotten pretty far along already, and I figured it's time I
shared our progress, and our problems, with "the lab."  Some of the
interesting issues are: how we decide what knowledge to encode, and how
we encode it; how we represent substances, parts, time, space, belief,
and counterfactuals; how CYC can access, compute, inherit, deduce, or
guess answers; how it computes and maintains plausibility (a sibling of
truth maintenance); and how we're going to squeeze two person-centuries
into the coming seven years, without having the knowledge enterers'
semantics "diverge".