[comp.ai] Doug Lenat's "cyc" project

doug@nixtdc.uucp (Doug Moen) (07/31/90)

The latest issue of "Discover" magazine has an article
on Doug Lenat's "cyc" project, which is nothing less
than a frame-based database of all of the common-place
knowledge of the world that a typical human adult would
have.  Apparently, they have been working on it for 6 years,
and there are several people who work full time adding new
knowledge to the database.  There is also a natural language
interface.  The article gives some short conversations
between cyc and a human that remind me of a bright
4-year-old child asking questions of its parents.  In other
words, the article conveys the impression that the first
real artificial intelligence is in the process of being
constructed.  Lenat says it will be able to read and understand
English text in 5 years, at which point the rate at which
it can be educated will greatly increase.  Other authorities
apparently feel he is too optimistic.
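
For those who haven't met the term: a "frame" is basically a named bundle
of slot/value pairs, with "isa" links providing inheritance.  Here is a toy
sketch in Python of my own devising (nothing like Cyc's actual machinery,
just the flavour of frame-style lookup):

    # Toy frame-style knowledge base (NOT Cyc's real representation).
    # Each frame maps slot names to values; "isa" links give inheritance.
    frames = {
        "Animal": {"isa": None,     "can_breathe": True},
        "Person": {"isa": "Animal", "can_talk": True},
        "Fred":   {"isa": "Person", "age": 35},
    }

    def get_slot(frame_name, slot):
        """Look up a slot, walking up the isa-chain when it is not local."""
        frame = frames.get(frame_name)
        while frame is not None:
            if slot in frame:
                return frame[slot]
            parent = frame.get("isa")
            frame = frames.get(parent) if parent else None
        return None

    print(get_slot("Fred", "can_breathe"))   # True, inherited via Person -> Animal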

On the other hand, the Hacker's Dictionary defines the
Lenat as the standard unit of bogosity, and goes on to
say that the micro-Lenat is the largest unit usable
for most applications.

Does anyone familiar with "cyc" want to comment?
My feeling is that someone is going to build an artificial
mind sooner or later, and cyc is a plausible first step.

Doug Moen.

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (07/31/90)

In article <1990Jul31.034417.19350@nixtdc.uucp> doug@nixtdc.UUCP (Doug Moen) writes:
>Does anyone familiar with "cyc" want to comment?
>My feeling is that someone is going to build an artificial
>mind sooner or later, and cyc is a plausible first step.

I heard a short presentation from the head of MCC about CYC (MCC is the
company/research consortium in Austin, TX building CYC).  It didn't sound
to me like an "intelligence", but rather an expert system (written in LISP) that
contained vast amounts of common-sense knowledge about the world.  This
includes all that interrelated stuff that has made it so hard to make
expert systems, etc., understand what a normal human is talking about.

By the way, where can I get a copy of the "Hacker's Dictionary"?

- Jim Ruehlin

damon@upba.UUCP (08/01/90)

> Does anyone familiar with "cyc" want to comment?
> My feeling is that someone is going to build an artificial
> mind sooner or later, and cyc is a plausible first step.

Cyc is described in pretty good detail in the book "Building Large
Knowledge-Based Systems: Representation and Inference in the
Cyc Project" by Douglas B. Lenat and R. V. Guha.  Published
by Addison-Wesley.  I picked my copy up in the local university
book store.

Cyc is an attempt to build a large knowledge base of "common sense"
information.  A separate project is attempting to build a
natural language interface which can draw on the knowledge
base to resolve ambiguity.  I believe the interface the project
members are currently using is a windowed "knowledge editor".

Damon Scaggs
uunet!frith!upba!damon

erich@eecs.cs.pdx.edu (Erich Boleyn) (08/01/90)

   There was a report published about "cyc" in book form called:

	"Building Large Knowledge Systems:
		Representation and Inference in the Cyc Project"

   by Douglas Lenat and R. Guha,

	published by Addison-Wesley (copyright 1990)


   I have yet to read this myself, but it looks interesting (I wish I
had more time...).

   Hope this helps,

   Erich

   ___--Erich S. Boleyn--___  CSNET/INTERNET:  erich@cs.pdx.edu
  {Portland State University}     ARPANET:     erich%cs.pdx.edu@relay.cs.net
       "A year spent in           BITNET:      a0eb@psuorvm.bitnet
      artificial intelligence is enough to make one believe in God"

cam@aipna.ed.ac.uk (Chris Malcolm) (08/04/90)

In article <1990Jul31.034417.19350@nixtdc.uucp> doug@nixtdc.UUCP (Doug Moen) writes:

>The latest issue of "Discover" magazine has an article
>on Doug Lenat's "cyc" project, which is nothing less
>than a frame-based database of all of the common-place
>knowledge of the world that a typical human adult would
>have.  

>My feeling is that someone is going to build an artificial
>mind sooner or later, and cyc is a plausible first step.

I presume you agree that apes, cats, and dogs have minds? Perhaps even
that beetles have minds? If so, how on earth is a program which can
mimic one thing which _only_ humans can do a plausible first step??


-- 
Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK

lantz@Apple.COM (Bob Lantz) (08/04/90)

cam@aipna.ed.ac.uk (Chris Malcolm) writes:

>In article <1990Jul31.034417.19350@nixtdc.uucp> doug@nixtdc.UUCP (Doug Moen) writes:

>>My feeling is that someone is going to build an artificial
>>mind sooner or later, and cyc is a plausible first step.

>I presume you agree that apes, cats, and dogs have minds? Perhaps even
>that beetles have minds? If so, how on earth is a program which can
>mimic one thing which _only_ humans can do a plausible first step??

Nit-picking.  While it is certainly *possible* to represent the
knowledge and common-sense reasoning of cats, dogs, etc., it's
pretty uninteresting.  The Cyc folks have solved many difficult
problems of knowledge representation and common-sense reasoning,
and their knowledge base translates readily into a straightforward
declarative form, with reasonable semantics, which could
interoperate with any other system that requires a large base of
common-sense knowledge.  Additionally, they have created a flexible,
generalized framework designed not only to support common-sense
knowledge and reasoning, but also to integrate more specialized
systems.

One of the most compelling reasons for representing human knowledge
and reasoning is that it is needed for any system which expects to
work in natural (human) language.  In order for language to be
translated into a reasonable semantic domain, you need to have that
domain.  Cyc (or a system like it) is intended to represent the semantic
objects of natural language.
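
To make the "semantic domain" point concrete, here is a tiny sketch in
Python, entirely my own invention (not Cyc's interface or vocabulary), of
how a declarative common-sense knowledge base can pick the right semantic
object behind an ambiguous word:

    # Hypothetical mini knowledge base: each sense of "bank" is linked to
    # related concepts.  The names are made up for illustration only.
    kb = {
        ("FinancialBank", "relatedTo"): {"Money", "Deposit", "Loan"},
        ("RiverBank",     "relatedTo"): {"River", "Water", "Fishing"},
    }

    def disambiguate(candidates, context_concepts):
        """Return the candidate sense sharing the most concepts with the context."""
        def overlap(sense):
            return len(kb.get((sense, "relatedTo"), set()) & context_concepts)
        return max(candidates, key=overlap)

    # "He deposited money at the bank."
    print(disambiguate(["FinancialBank", "RiverBank"], {"Money", "Deposit"}))

Real systems need far more than concept overlap, of course; the point is
only that without some such store of world knowledge there is nothing for
the language to be translated *into*.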

>-- 
>Chris Malcolm    cam@uk.ac.ed.aipna   031 667 1011 x2550
>Department of Artificial Intelligence, Edinburgh University
>5 Forrest Hill, Edinburgh, EH1 2QL, UK

Bob Lantz

pluto@zaius.ucsd.edu (Mark Plutowski) (08/05/90)

In article <43632@apple.Apple.COM> lantz@Apple.COM (Bob Lantz) writes:
>cam@aipna.ed.ac.uk (Chris Malcolm) writes:
>>In article <1990Jul31.034417.19350@nixtdc.uucp> doug@nixtdc.UUCP (Doug Moen) writes:
MOEN: My feeling is that someone is going to build an artificial
MOEN: mind sooner or later, and cyc is a plausible first step.

MALCOLM: I presume you agree that apes, cats, and dogs have minds? Perhaps even
MALCOLM: that beetles have minds? If so, how on earth is a program which can
MALCOLM: mimic one thing which _only_ humans can do a plausible first step??

LANTZ: Nit-picking.  While it is certainly *possible* to represent the
LANTZ: knowledge and common-sense reasoning of cats, dogs, etc., it's
LANTZ: pretty uninteresting.  ...

Not to take away anything from the Cyc project, which may be an 
important step for AI, I must agree with Mr. Malcolm here, and
in particular, take exception to your immediately preceding remark, 
regarding the uninteresting thought processes of cats and dogs.

I reiterate: Cyc may provide a useful database of results, and may have
practical applications.  That's enough for me, and for most people, to
justify continuing it.  But we would probably learn more about the
fundamental principles underlying intelligence PER SE if we could model
a beetle.  Having done that, we can then move on to a Beatle, say Ringo.

lantz@Apple.COM (Bob Lantz) (08/06/90)

pluto@zaius.ucsd.edu (Mark Plutowski) writes:

>Not to take away anything from the Cyc project, which may be an 
>important step for AI, I must agree with Mr. Malcolm here, and
>in particular, take exception to your immediately preceding remark, 
>regarding the uninteresting thought processes of cats and dogs.

My point is that due to its representation of human knowledge, Cyc
is particularly interesting (to me, obviously) and a good idea; it has
produced useful and interesting results and is something that is sorely
needed for work with natural language.

>But we would probably learn more about the fundamental principles underlying 
>intelligence PER SE if we could model a beetle.

Might be useful for certain applications, e.g. entomology, ecology,
biosystem dynamics.  A major problem with AI systems to date has been
their highly specialized nature, the worst example being "expert systems"
which solve specialized problems quite well, but are really shallow and
brittle: the information encoded in them is highly specialized and lacks
connections to more generalized information.  As a result, they perform
poorly outside their specific domain, and do not interact synergistically
with other, similar, specialized systems. Cyc provides a framework of general
knowledge and common-sense reasoning, which more specialized systems 
could be built on top of and into.  Such systems would be able to
handle problems in a variety of domains, at multiple levels, and translate
between them.
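
A toy sketch of the layering idea, in Python of my own devising (this is
not Cyc's architecture, just the shape of the argument): a narrow expert
rule consults a broad common-sense layer before committing, instead of
failing silently outside its domain:

    # Made-up common-sense facts about what parts things have.
    common_sense = {
        ("Car",  "has_part"): {"Engine", "Battery", "Wheels"},
        ("Boat", "has_part"): {"Engine", "Hull"},
    }

    def expert_rule(symptom):
        """Narrow expert knowledge: knows exactly one fault pattern."""
        return "Check the battery" if symptom == "won't start" else None

    def diagnose(device, symptom):
        advice = expert_rule(symptom)
        # Common-sense filter: never suggest checking a part the device lacks.
        if advice == "Check the battery" and \
           "Battery" not in common_sense.get((device, "has_part"), set()):
            return "Expert rule does not apply to a " + device
        return advice or "No specific advice"

    print(diagnose("Car",  "won't start"))   # Check the battery
    print(diagnose("Boat", "won't start"))   # Expert rule does not apply to a Boat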

> Having done that, we
>can then move on to a Beatle, say Ringo.  

Cyc probably has the advantage, because it already has information
on the domain of human general knowledge.  The Beetle-system would
have information on the insect world (e.g. insect foods, predators,
smells, self-defense, mound construction, etc.) which would not be
terribly useful to Ringo (even if he has been reduced to doing Sun
Country wine-cooler ads...)

It's possible that some of the thought processes (e.g. eat when you're
hungry, fight or flee) might be useful to him, but more sophisticated
reasoning (e.g. sue people who violate your copyright :-)) would
have to be incorporated to make the Beatle system useful.

Bob

almquist@hpavla.AVO.HP.COM (Mike Almquist) (08/06/90)

I think Cyc is a worthwhile project.  For those of you who criticize, don't
be too quick to judge.  What are you doing to advance the field?  Apparently
the project has a few more years left (I think at least 5 more).  BUT,
according to Lenat, he is right on schedule.  Let's give the guy his few years
before we pass judgment on the usefulness of his project.  I feel it is
a worthwhile step.  Regardless of his final results, his knowledge base will
definitely be invaluable in the years to come.  For instance, using his
knowledge base one could perhaps more successfully, or at least more
realistically, model a virtual reality.  I too am hesitant to agree that Lenat
will succeed in creating a pseudo-sentient computer but, who knows, only time
will tell.  Remember the opposition the Wright brothers met?  Perhaps we will
all be astounded - maybe it will fly (-:

- Mike Almquist
  HP, Avondale, PA
  almquist@hpavla.avo.hp.com

jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) (08/10/90)

In article <9300001@hpavla.AVO.HP.COM> almquist@hpavla.AVO.HP.COM (Mike Almquist) writes:
>For instance, using his knowledge base one could perhaps more successfully,
>or at least more realistically, model a virtual reality.

This is a bit off the subject, but what's a "virtual reality"?  I hadn't
heard the term until today, and now I've seen it three times!

Is it a type of sophisticated user interface/AI control to present an
entire scenario to a person?  Or something more interesting?

- Jim Ruehlin

jgdavis@waikato.ac.nz (08/10/90)

In article <9300001@hpavla.AVO.HP.COM>, almquist@hpavla.AVO.HP.COM (Mike Almquist) writes:
> I think Cyc is a worthwhile project.  For those of you who criticize, don't
> be too quick to judge.  What are you doing to advance the field?  Apparently
> the project has a few more years left (I think at least 5 more).  BUT,
> according to Lenat, he is right on schedule.  Let's give the guy his few years
> before we pass judgment on the usefulness of his project.  I feel it is
> a worthwhile step.  Regardless of his final results, his knowledge base will
> definitely be invaluable in the years to come.  For instance, using his
> knowledge base one could perhaps more successfully, or at least more
> realistically, model a virtual reality.  I too am hesitant to agree that Lenat
> will succeed in creating a pseudo-sentient computer but, who knows, only time
> will tell.  Remember the opposition the Wright brothers met?  Perhaps we will
> all be astounded - maybe it will fly (-:
> 
> - Mike Almquist
>   HP, Avondale, PA
>   almquist@hpavla.avo.hp.com

Remember also the success, or lack of it, of the alchemists.

-Joseph G. Davis

raja@copper.ucs.indiana.edu (Raja Sooriamurthi) (08/10/90)

In comp.ai you write:

>Remember also the success, or lack of it, of the alchemists.
>-Joseph G. Davis

Yes, the alchemists did fail in their prime goal of discovering the
philosopher's stone to turn base metals into gold.  But in the process of
their search they did find a number of useful alloys and advanced medieval
chemistry to a large extent.

Agreed, their prime objective wasn't achieved, but the side effects were
significant.  While I don't want to say that the Cyc project's prime mission
isn't going to succeed, I have no doubt that it has been (and will be)
successful in other ways.

- Raja


---------------------------------------------------------------------------
Raja Sooriamurthi                              Computer Science Department
raja@copper.ucs.indiana.edu                       Indiana University
---------------------------------------------------------------------------

braum@Sol33.essex.ac.uk (Branscombe M P C) (08/16/90)

Surely the problem with modelling or recreating animal intelligence is that we
cannot have subjective knowledge of the processes.  Isn't it Chomsky who says
that the best study of language will come from introspection by a native
speaker?
Mary Branscombe

seim@tubopal.UUCP (Kai Seim) (08/17/90)

In article "Re: Doug Lenat's "cyc" project", raja@copper.ucs.indiana.edu
(Raja Sooriamurthi) writes:

>Yes, the alchemists did fail in their prime goal of discovering the
>philosopher's stone to turn base metals into gold.  But in the process of
>their search they did find a number of useful alloys and advanced medieval
>chemistry to a large extent.
>
>Agreed, their prime objective wasn't achieved, but the side effects were
>significant.  While I don't want to say that the Cyc project's prime mission
>isn't going to succeed, I have no doubt that it has been (and will be)
>successful in other ways.
>
>- Raja

I always thought that science worked in a different way when it comes to
finding new knowledge about (interesting :-) facts.  Perhaps you should go
back to (jail and not collect $2000 :-)!!

The problem is that science should start with a hypothesis and test it; if it
holds up, you have won new knowledge, and if it doesn't, you have to find a
new hypothesis and try again (that's the very short version, for beginners :-).

I really don't think that you can defend stupid projects by pointing to their
side effects.  It has nothing to do with science, but very much with alchemy,
as Joseph G. Davis wrote.

Greetings

Kai Seim


------------------------------------------------------------------------
Kai Seim				Department of Computer Science
TU Berlin				of the University of Technology
Fachbereich Informatik			Berlin
Franklinstr. 28/29
D 1000 Berlin 10			e-mail: seim@opal.cs.tu-berlin.de
phone:	+49 30 314-73285		fax:  +49 30 314-24891
-------------------------------------------------------------------------

punch@pleiades.cps.msu.edu (Bill Punch) (08/18/90)

Look, I must admit to being mostly a critic of cyc; it smacked too much
of the Star Wars "let's throw enough money at it, that will probably solve
it" approach.

But the other side is that Lenat has a "vision", not just a goal. That
is, there is this mishmash in AI because all the real problems are too
hard, so folks get caught up playing with small, simpler problems. He
has taken the bull by the horns and said something like "Look, let's see
if a large amount of KNOWLEDGE appropriately managed will get us to
those hard problems". It is a premise of sorts, but really a vision of
how to get out of the morass of little systems solving little problems. 

I think he WILL learn something about knowledge management, which is
what he set out to do. I also hope that cyc will be enabling for other
groups to go beyond small AI. One can argue with the techniques used,
methods produced etc., but Lenat has at least taken a bold step
(presumably) forward. 

Finally, it seems to me that it is the people with vision that have
guided and will continue to guide the path of AI, people shooting for
broad horizons, not just the next hill.

			>>>bill<<<

eichmann@cs.wvu.wvnet.edu (David Eichmann) (08/19/90)

punch@pleiades.cps.msu.edu (Bill Punch) writes:

>Look, I must admit to being mostly a critic of cyc, it smacked too much
>of the Starwars "lets throw enough money at it, that will probably solve
>it" approach. 

>But the other side is that Lenat has a "vision", not just a goal. ...
>...
>I think he WILL learn something about knowledge management, which is
>what he set out to do. I also hope that cyc will be enabling for other
>groups to go beyond small AI. One can argue with the techniques used,
>methods produced etc., but Lenat has at least taken a bold step
>(presumably) forward. 

>Finally, it seems to me that it is the people with vision that have
>guided and will continue to guide the path of AI, people shooting for
>broad horizions not just the next hill.

>			>>>bill<<<

   I agree with much of the sentiment that Bill expresses here, and extend
it to the attitude of most Americans today (please don't start a US-Japan
style flame war here!).  Academic and commercial entrepreneurship has lost
its long-range focus and seems to look only at short-term possibilities.
Lenat's project is an excellent example of what can be gained (and lost!)
by reaching beyond the next hill...

- Dave Eichmann

ylikoski@csc.fi (08/19/90)

In article <1990Jul31.034417.19350@nixtdc.uucp>, doug@nixtdc.uucp (Doug Moen) writes:
> The latest issue of "Discover" magazine has an article
> on Doug Lenat's "cyc" project, which is nothing less
> than a frame-based database of all of the common-place
> knowledge of the world that a typical human adult would
> have.
> ...
>  Lenat says it will be able to read and understand
> English text in 5 years, at which point the rate at which
> it can be educated will greatly increase.  Other authorities
> apparently feel he is too optimistic.
> ...
> My feeling is that someone is going to build an artificial
> mind sooner or later, and cyc is a plausible first step.
> 
> Doug Moen.

This resembles the "Advice Taker" project idea once proposed
by John McCarthy.  Perhaps JMC himself could tell us about the Advice
Taker (or he might have useful contributions based on his Advice
Taker work).

Why not make it possible to build not only propositional but also
procedural information into the "cyc" incrementally (by means of the NL
interface)?

-------------------------------------------------------------------------------
Antti (Andy) Ylikoski              ! Internet: YLIKOSKI@CSC.FI
Helsinki University of Technology  ! UUCP    : ylikoski@opmvax.kpo.fi
Helsinki, Finland                  !
-------------------------------------------------------------------------------

lantz@Apple.COM (Bob Lantz) (08/20/90)

ylikoski@csc.fi writes:

>Why not have the possibility to build incrementally (by means of the NL
>interface) not only propositional but also procedural information
>into the "cyc"?

I believe that although Cyc optimizes (i.e. hard-codes in Lisp) most
common inferences (i.e. procedural information), the information in
the system, propositional and procedural, is also encoded in declarative
form.  In general, the system is designed to support multiple inference
schemes.  The rationale behind this is that certain inference schemes
may be better for certain problems, and a clear-cut "winning" method
has not been discovered.  Currently, the general-purpose algorithms can
reach the same conclusions as the optimized algorithms (because of the
declarative encoding), but at a much slower rate.
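
A toy sketch of that division of labour, in Python I made up myself (not
Cyc's real code or CycL): the same declarative "isa" facts are answered
both by a slow general-purpose chaining procedure and by a special-purpose
cached procedure, and the two agree:

    # Declarative facts (my own toy notation, purely illustrative).
    facts = {("Fred", "Person"), ("Person", "Animal"), ("Animal", "Thing")}

    def isa_general(x, y):
        """General-purpose inference: chain through the declarative facts."""
        if (x, y) in facts:
            return True
        return any(a == x and isa_general(b, y) for (a, b) in facts)

    # "Heuristic-level" stand-in: precompute the transitive closure once,
    # so later queries become a constant-time lookup.
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True

    def isa_fast(x, y):
        return (x, y) in closure          # same answers, much faster

    print(isa_general("Fred", "Thing"), isa_fast("Fred", "Thing"))  # True True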

The natural language interface is not complete enough to enter information
into the system, I believe. To complete the natural language system,
they need a more complete knowledge base, so there's a sort of bootstrap
problem which they are solving by encoding information by hand.  Lenat
and others have compared the current process with "teaching by brain 
surgery."  

>Antti (Andy) Ylikoski              ! Internet: YLIKOSKI@CSC.FI

Bob Lantz

jtr@cs.exeter.ac.uk (Jason Trenouth) (08/21/90)

I've just read the following article:

D. B. Lenat, R. V. Guha, K. Pittman, D. Pratt, and M. Shepherd (1990)
	"CYC: towards programs with common sense" CACM 33(8) 30-49,
	August 1990.

Cyc is a very large knowledge base whose language CycL has two levels: the
epistemological level (EL) and the heuristic level (HL). The EL is first-order
predicate calculus augmented with reification (statements about statements)
and reflection (assumptions supporting beliefs). This level exists to provide a
simple semantics and an easy medium of communication with users. The HL is an
optimisation layer that uses special-purpose data structures and procedures to
do most of the inferencing.

Cyc is being built by focusing on scenarios (e.g. "buying something") and
adding the knowledge that allows the Cyc system to answer common-sense
questions about them.
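
To illustrate, in Python notation of my own (not actual CycL), what
reification buys you: an assertion is itself an object, so other
assertions, including scenario bookkeeping, can talk about it:

    # An ordinary assertion, as a plain tuple.
    likes  = ("likes", "Mary", "Pizza")
    # Statements about that statement: who believes it, where it was asserted.
    meta   = ("believedBy", likes, "Fred")
    source = ("assertedIn", likes, "BuyingScenario-42")   # made-up scenario name

    kb = {likes, meta, source}

    def statements_about(assertion):
        """Find every assertion whose second argument is the given assertion."""
        return [s for s in kb if len(s) == 3 and s[1] == assertion]

    print(statements_about(likes))   # the belief and the provenance statements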

				***

I tend to agree with the philosophy behind Cyc for 3 reasons:

<1>
In addition to studying the individual pieces of the AI puzzle, the research
establishment should also be funding work on the "big picture". Of course
funding should probably be biased (for the moment) towards piecemeal projects
since we don't seem to fully understand any of the components of AI.  However,
regular "stocking taking" is a good thing to do: can the current
techniques/technologies fit together? The Cyc project (and the basic-agent
projects) is (are) justifiable as _integration_ research.

<2>
The scale of Cyc also makes it unique. The largest knowledge-based
systems around have O(100000) assertions or rules, and the majority have only
O(100)-O(1000). Lenat aims to keep building on Cyc until it has O(100000000)
such objects. Clearly, this represents a _new_ test for each of the component
technologies. So, the Cyc project is justifiable as _very large scale_
research.

<3>
The result of the Cyc project will be a resource that the whole AI community
can benefit from. A very large public-domain knowledge-base can save
researchers from perpetually re-inventing the wheel.  Instead they will be
able to concentrate on their particular interest e.g.  planning, faster
inference, etc. A common knowledge base will allow better comparisons between
these parasitic projects. A common-sense knowledge base can also be used to
remove the brittleness from the current generation of expert systems. So, the
Cyc project is justifiable as a _resource_ provider.

				***

Ultimately, the Cyc project is justifiable purely because it takes a
_different approach_. In this sense I feel it is a bed-fellow of (<whisper>:)
Connectionism. What kind of self-respecting knowledge-base-system designer
would entertain building an inference mechanism that only searched from one
direction?! Equally, for efficiency at the methodological level, a certain
amount of pluralism is "a Good Thing":

Research Table       TOP-DOWN                   BOTTOM-UP

DEPTH-FIRST          Mainstream Symbolic AI     Current Connectionism

BREADTH-FIRST        Cyc Project                Future Connectionism?

In other words: there is room for Cyc.

	Ciao - JT
--
______________________________________________________________________________
| Jason Trenouth,                        | JANET:  jtr@uk.ac.exeter.cs       |
| Comp. Sci. Dept., Exeter Univ.,        | UUCP:   jtr@expya.uucp            |
| Devon, EX4 4PT, UK. TEL: (0392) 264061 | BITNET: jtr%uk.ac.exeter.cs@ukacrl|