[comp.ai.neural-nets] I've read this somewhere...

griff@intelob.biin.com (Richard Griffith) (01/31/89)

I seem to recall reading something like the following....

   some researchers in neural-networking have created a [net|language|
system|...] which was fed the equivalent of the basis for mathematical
set theory; within a short time (read < 1 day) the system completed the
formation of all algebraic theorems, and was going on to calculus....


   Someone here at work would like to know where I read that, but being a
bit of a sensationalist (I know, I know, get your facts straight :-^) I
have forgotten where I read about that particular project.  I was hoping
someone out there might shed some light on the subject, so I can pass
*accurate* info on to my interested co-worker.


                   many thanks in advance,


                              - griff

**************************************************************************
* Richard E. Griffith      *   "Someday soon we'll stop to ponder -      *
*    "griff"		   * 	What on Earth's this spell we're under?  *
* BiiN, Hillsboro Ore.	   *    We made the grade, but still we wonder - *
* (When are we getting	   *    Who the Hell we are?"                    *
*  Our own Usenet node?)   * 		    - Styx "Grand Illusion"      *
**************************************************************************
* These opinions are mine only, `cuz nobody'd pay for drivel like this!  *
**************************************************************************

mesard@bbn.com (Wayne Mesard) (01/31/89)

In article <GRIFF.89Jan30135458@intelob.biin.com> griff@intelob.biin.com (Richard Griffith) writes:
>within a short time (read < 1 day) the system completed the 
>formation of all algebraic theorems, and was going on to calculus....
[...]
>sensationalist

You might be referring to Colossus: The Forbin Project.  It was
developed in Hollywood around 1970, but nothing good came of it.

N.B.: :-) :-)

-- 
void Wayne_Mesard();   Mesard@BBN.COM   Bolt Beranek and Newman, Cambridge, MA

pastor@bigburd.PRC.Unisys.COM (Jon Pastor) (02/01/89)

In article <GRIFF.89Jan30135458@intelob.biin.com> griff@intelob.biin.com 
(Richard Griffith) writes:
>
>I seem to recall reading something like the following....
>
>   some researchers in neural-networking have created a [net|language|
>system|...] which was fed the equivalent of the basis for mathematical
>set theory; within a short time (read < 1 day) the system completed the
>formation of all algebraic theorems, and was going on to calculus....
>
>
>   Someone here at work would like to know where I read that, but being a
>bit of a sensationalist (I know, I know, get your facts straight :-^) I 
>have forgotten where I read about that particular project.  I was hoping
>someone out there might shed some light on that subject,  so I can pass 
>*accurate* info onto my interested co-worker.
>
>
[Note: I attempted to reply by mail to this, but it got bounced; anyway, it's an
intriguing enough rumor that it should be squelched (or confirmed, if anyone
knows something I don't) quickly -- all the NN community needs is rampant
rumors about spectacular and non-existent successes...]

Well, it sounds a lot like you're talking about AM, a (non-neural-net) program
written by Doug Lenat (at Stanford, I believe) in the late 70s.  Among other 
things, it was given Peano's axioms and "discovered" numbers, arithmetic, and
some abstract concepts like "prime number".  It was really quite interesting,
and there is a (half) book about it published by McGraw-Hill (authors are Lenat
and (Randall) Davis).  AM grew up to be Eurisko; the Knowledge Representation
portion of AM/Eurisko grew up to be CYC, which is an ongoing project of Lenat's
at MCC.

AM was a frame-based system, with lots of attached rules and procedures.  It
was in no way related to neural nets, nor is Lenat's current work on CYC.  
Sorry to be the bearer of what I assume is bad news, but the coincidence would
have been too remarkable for words. 
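For the curious, the flavor of AM's approach can be sketched very loosely -- this is not Lenat's code, the function names here are invented for illustration, and AM's actual machinery (frames plus a couple hundred heuristics with an agenda) was far richer.  The core idea: start from primitive concepts, let a heuristic specialize one of them, and keep the new concept if its examples look interesting.

```python
# A toy sketch of AM-style concept discovery -- NOT Lenat's actual code.
# Start from primitive concepts (the naturals and "divisors-of"), apply a
# specialization heuristic, and recover "prime number" as a derived concept.

def divisors(n):
    """Primitive concept: the divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def specialize_by_extreme(examples, measure, extreme_value):
    """Heuristic: form a new concept from cases where a measure is extreme.
    AM 'discovered' primes in roughly this spirit: numbers whose divisor
    count takes the smallest nontrivial value, 2."""
    return [x for x in examples if measure(x) == extreme_value]

def discover_primes(limit=30):
    naturals = range(limit)  # examples of the primitive concept
    return specialize_by_extreme(naturals, lambda n: len(divisors(n)), 2)

print(discover_primes())  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Of course, what made AM notable was not any one specialization but the agenda of heuristics deciding *which* concept to explore next -- which is also the part people had trouble reproducing.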

mmoore@cattell.psych.upenn.edu (Mike Moore) (02/01/89)

What you describe sounds vaguely like Doug Lenat's '76 Stanford
dissertation on "AM".  This system was not connectionist, though,
and didn't get as far as calculus, but it did "rediscover" some
interesting theorems about primes.
-- 
Mike Moore			Computer Operations Manager, Psychology Department
mmoore%psych@upenn.edu		University of Pennsylvania          (215) 898-2141

aboulang@bbn.com (Albert Boulanger) (02/01/89)

In article <GRIFF.89Jan30135458@intelob.biin.com> griff@intelob.biin.com 
(Richard Griffith) writes:
>
>I seem to recall reading something like the following....
>
>   some researchers in neural-networking have created a [net|language|
>system|...] which was fed the equivalent of the basis for mathematical
>set theory; within a short time (read < 1 day) the system completed the
>formation of all algebraic theorems, and was going on to calculus....
>
>
>   Someone here at work would like to know where I read that, but being a
>bit of a sensationalist (I know, I know, get your facts straight :-^) I 
>have forgotten where I read about that particular project.  I was hoping
>someone out there might shed some light on that subject,  so I can pass 
>*accurate* info onto my interested co-worker.
>
>

I know of one neural-net reference that did number-theoretic discovery stuff:

"An Associative Hierarchical Self-Organizing System"
Barry R. Davis, IEEE Trans. Systems, Man, & Cybernetics, SMC-15, No.
4, July/August 1985, pp 570-579.

He uses some ideas developed by Stuart Geman.



Albert Boulanger
BBN Systems & Technologies Corp.
aboulanger@bbn.com

pattis@june.cs.washington.edu (Richard Pattis) (02/01/89)

Sounds like a scene from the movie, "The Forbin Project".

Rich

ke@otter.hpl.hp.com (Kave Eshghi) (02/01/89)

I think the system you are talking about is the AM system developed by
Doug Lenat at Stanford in the late 70s.  It is not a neural-net system;
it is an orthodox AI hack.  It starts off from some basic mathematical
notions and explores their consequences.

This system generated a lot of controversy because some researchers claimed 
that they could not reproduce its behaviour from the published accounts of how
it works.

Cheers
Kave Eshghi