root@blakex.RAIDERNET.COM (Blake McBride) (03/19/91)
I have a few questions relating to "NETL: A System for Representing and Using Real-World Knowledge" and Scott Fahlman (the author).

A. Is Scott Fahlman on the net, and if so, where?
B. What's been happening with the NETL project?
C. Does anyone have C/Lisp code for a NETL simulator?

Thanks for the info...

Blake.

--
Blake McBride                     Home (615) 790-8521
3020 Liberty Hills Drive          Work (615) 790-1088
Franklin, TN 37064                root@blakex.raidernet.com
U.S.A.                            ...!uunet!mjbtn!raider!blakex!root
sef@sef-pmax.slisp.cs.cmu.edu (03/29/91)
I don't read this newsgroup regularly -- it eats up too much time -- but I was just browsing and saw Blake McBride's post asking what has become of the NETL system. Since a few other people have asked me about this lately, I thought it might be useful to post the answer.

First, I have been at Carnegie Mellon University, School of Computer Science, since 1978. I can be reached by Internet at <fahlman@cs.cmu.edu>.

For those who have never heard of NETL, it was an early (1977) attempt to embed a knowledge-representation system in massively parallel hardware. Essentially, NETL represents the nodes and links of a semantic network as very simple hardware devices capable of passing around single-bit markers. Such a machine can do property inheritance and some forms of search very fast. For more details, see my book _NETL: A System for Representing and Using Real-World Knowledge_, MIT Press, 1979. (A toy sketch of the marker-passing idea appears below, after the summary.)

A quick summary of what has happened since then:

1. One of my Ph.D. students, Dave Touretzky, undertook the task of putting NETL (or at least the part about multiple inheritance with exceptions) on a firm logical foundation. This work became his Ph.D. thesis and was published by Morgan Kaufmann as _The Mathematics of Inheritance Systems_ (1986). There has been considerable subsequent work in this vein by Touretzky, Thomason, Horty, and others. Touretzky is now on the CMU faculty.

2. I spent some time working on ways of implementing a massively parallel NETL machine. In the meantime, Danny Hillis at MIT designed the Connection Machine as a way of implementing NETL-like "data-level parallelism". The Connection Machine, generalized to handle a much wider range of problems than just NETL, is now being manufactured by Thinking Machines Corporation. Surprisingly, not much NETL-like work has been done on the CM. I suspect that this is because the current model CM is too small for NETL-like applications, but is large enough for many other useful tasks.

3. While Touretzky and others were trying to formalize some aspects of knowledge representation, I personally was more interested in dealing with the sloppy, fuzzy-edged stuff that didn't quite fit into a clean symbolic framework. For a while I worked on ideas for a so-called "Thistle" machine that was sort of a fuzzy NETL. It passed around continuous values representing degrees of certainty, strength of evidence, etc. I eventually became convinced that such systems would have to incorporate some powerful form of learning -- building them by hand was so hard that it seemed like a dead end. However, Jerry Feldman and others of the "Rochester School" have gone on to build some impressive value-passing systems by hand.

4. I began working with Geoff Hinton, and gradually came around to his view that a more neuron-like approach, with a distributed representation and some powerful methods for learning from examples, is more likely to lead to an understanding of intelligence than a "localist" system like NETL. Some of the early steps along this path are documented in an article I wrote with Geoff for the January 1987 issue of IEEE Computer.

5. Lately, I've been working on developing better learning algorithms for what are now called "artificial neural networks". This has led to the Quickprop algorithm and the Cascade-Correlation architecture, among other things; a sketch of the core Quickprop update also appears below. (There was also a period of about five years when most of my time went into Common Lisp -- a bit of tool-building that got seriously out of control.)

So, for the past 8-10 years, the NETL work has been on the back burner.
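For those who want a concrete feel for the marker-passing scheme, here is a toy serial sketch in Common Lisp. It is not NETL code; all the node and slot names are my own invention, and the "any cancellation wins" rule below is much cruder than the inheritance semantics Touretzky worked out. On a NETL machine the marker sweep would be a single parallel wavefront through the hardware rather than a recursive walk.

    ;;; A toy marker-passing net (illustrative only -- not NETL code).
    ;;; Each node holds one marker bit; a query sweeps the bit up the
    ;;; IS-A links, then checks which marked nodes assert or cancel
    ;;; the property in question.

    (defstruct node
      name
      (parents nil)     ; IS-A links (list of nodes above this one)
      (properties nil)  ; properties asserted locally (list of symbols)
      (cancels nil)     ; properties explicitly cancelled here
      (marked-p nil))   ; the single marker bit used by a query

    (defvar *all-nodes* '()
      "Registry of every node, so queries can scan the marked ones.")

    (defun make-net-node (name &key parents properties cancels)
      (let ((n (make-node :name name :parents parents
                          :properties properties :cancels cancels)))
        (push n *all-nodes*)
        n))

    (defun propagate-marker (node)
      "Mark NODE and everything above it in the IS-A hierarchy."
      (unless (node-marked-p node)
        (setf (node-marked-p node) t)
        (mapc #'propagate-marker (node-parents node))))

    (defun has-property-p (node property)
      "Does NODE inherit PROPERTY?  True if some marked node asserts
    it and no marked node cancels it (a cruder rule than Touretzky's)."
      (dolist (n *all-nodes*) (setf (node-marked-p n) nil))
      (propagate-marker node)
      (let ((asserted nil) (cancelled nil))
        (dolist (n *all-nodes*)
          (when (node-marked-p n)
            (when (member property (node-properties n)) (setf asserted t))
            (when (member property (node-cancels n)) (setf cancelled t))))
        (and asserted (not cancelled))))

    ;; The classic example: elephants are gray, but Clyde is an exception.
    (let* ((elephant (make-net-node 'elephant :properties '(gray)))
           (clyde    (make-net-node 'clyde :parents (list elephant)
                                           :cancels '(gray))))
      (list (has-property-p elephant 'gray)    ; => T
            (has-property-p clyde 'gray)))     ; => NIL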
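The heart of the Quickprop update is similarly compact. The sketch below shows only the central secant step plus a growth limit; the full algorithm adds several safeguards that are omitted here, and the parameter defaults are merely typical choices, not gospel. The idea: treat the error curve for each weight as a parabola, fit it through the current and previous slopes, and jump toward its minimum.

    (defun quickprop-step (slope prev-slope prev-step
                           &key (epsilon 0.5) (mu 1.75))
      "One simplified Quickprop weight update.  SLOPE and PREV-SLOPE
    are dE/dw measured after the current and previous steps; PREV-STEP
    is the previous weight change.  Fit a parabola through the two
    slopes and jump toward its minimum, never growing the step by
    more than a factor of MU."
      (cond
        ;; No previous step to extrapolate from: plain gradient descent.
        ((zerop prev-step)
         (- (* epsilon slope)))
        ;; Flat (degenerate) parabola: take the largest allowed step.
        ((= slope prev-slope)
         (* mu prev-step))
        (t
         (let ((step (* prev-step (/ slope (- prev-slope slope)))))
           (if (> (abs step) (* mu (abs prev-step)))
               (* mu (abs prev-step) (signum step))  ; cap the growth
               step)))))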
I still think that there are a number of good ideas about knowledge representation in that work, such as the way contexts are handled. I'm a bit surprised that others working on big real-world knowledge bases haven't picked up some of these ideas, but then I haven't been out there hitting people over the head with them. To me, right now, the question of how to deal with all the fuzzy, messy stuff seems more important, so I'm spending most of my time on the neural-net research.

-- Scott Fahlman
hendler@dormouse.cs.umd.edu (Jim Hendler) (04/02/91)
Scott Fahlman writes that little or no NETL follow-up work has been done on the CM. He's only partially right: my students and I have been developing a frame-based knowledge representation language for the CM. The inheritance algorithms in PARKA are more sophisticated than those used by NETL, more information is propagated in the activation stage, and the language is significantly more ambitious than NETL was (the PARKA language, when completed, should be close to equivalent to most of the term-subsumption languages being discussed these days).

It will also be quite fast. Several conference papers and a forthcoming JPDC article discuss our inheritance results -- basically, we get linear (order of the depth of the network) times for top-down inheritance in multiple-inheritance hierarchies. For networks of over 30,000 nodes (averaging 8-10 links per node), we can find all nodes with a given property in times on the order of 1-5 seconds, depending on the network topology. These timings were done on random networks. We've also hand-crafted a 1000-node network of facts about US states, animals, and agriculture, which we are analyzing to get a better topological model to use when generating large random networks.

Technical reports describing both the parallel implementation and the language design are available.

-Jim Hendler
 UMCP

P.S. Work on PARKA is funded by the Office of Naval Research under grant N00014-88-K-0560.
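To give a feel for what "linear in the depth of the network" means for a top-down sweep, here is a toy serial sketch in Common Lisp. This is not PARKA code: on the CM each level of the sweep is a single parallel step across all processors, PARKA's handling of cancellation is more careful than the simple rule used here, and all names below are invented for illustration.

    ;;; Top-down property sweep (illustrative sketch, not PARKA code).
    ;;; To find every node with property P, start at the nodes that
    ;;; assert P and push it down the ISA hierarchy one level per step.
    ;;; Each step is one parallel operation on a machine like the CM,
    ;;; so the sweep costs on the order of the network's depth.

    (defstruct pnode
      name
      (children nil)   ; nodes directly below in the ISA hierarchy
      (asserts nil)    ; properties asserted directly at this node
      (cancels nil))   ; properties explicitly overridden here

    (defun nodes-with-property (all-nodes property)
      "Return every node that inherits PROPERTY, sweeping level by
    level from the nodes that assert it outright."
      (let ((result '())
            (wave (remove-if-not
                   (lambda (n) (member property (pnode-asserts n)))
                   all-nodes)))
        (loop while wave
              do (setf result (union wave result))
                 ;; One "parallel step": every wave node hands the
                 ;; property to its children, except where cancelled.
                 (setf wave
                       (remove-duplicates
                        (remove-if
                         (lambda (n)
                           (or (member n result)
                               (member property (pnode-cancels n))))
                         (mapcan (lambda (n)
                                   (copy-list (pnode-children n)))
                                 wave)))))
        result))

    ;; Example: HAS-MILK asserted at MAMMAL sweeps down to DOG and CAT.
    (let* ((dog    (make-pnode :name 'dog))
           (cat    (make-pnode :name 'cat))
           (mammal (make-pnode :name 'mammal :children (list dog cat)
                               :asserts '(has-milk))))
      (mapcar #'pnode-name
              (nodes-with-property (list mammal dog cat) 'has-milk)))
    ;; => (MAMMAL DOG CAT), in some order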