sts@ssc-vax.UUCP (Stanley T Shebs) (08/10/83)
First let me get in one last (?) remark about where the Japanese are in
AI - pattern recognition and robotics are useful but marginal in the AI
world.  Some of the pattern recognition work seems to be reaching the
same conclusions now that real AI workers reached ten years ago (those
who don't know history are doomed to repeat it!).

Now on to the good stuff.  I have been thinking about knowledge
representation (KR) recently and made some interesting (to me, anyway)
observations.

1. Certain KRs tend to show up again and again, though perhaps in
   well-disguised forms.

2. All the existing KRs can be cast into something like an
   attribute-value representation.

Space does not permit going into all the details, but as an example,
the PHRAN language analyzer from Berkeley is actually a specialized
production rule system, although its origins were elsewhere (in parsers
using demons).  Semantic nets are considered obsolete and ad hoc, but
predicate logic reps end up looking an awful lot like a net (so does a
sizeable frame system).  A production rule has two attributes: the
condition and the action.  Object-oriented programming (Smalltalk and
Flavors) uses the concept of attributes (instance variables) attached
to objects.  There are other examples.

Question: is there something fundamentally important and inescapable
about attribute-value pairs attached to symbols?  (A rough sketch of
what I mean is appended at the end of this message.)  (Ordinary program
code is a representation of knowledge, but doesn't look like av-pairs -
is it a valid counterexample?)  What other possible KRs are there?
Certain KRs (such as RLL, which is really a very interesting system)
claim to be universal and capable of representing anything.  Are there
any particularly difficult concepts that *no* KR has been able to
represent (even in a crude way)?  What is so difficult about those
concepts, if any such exist?

Just stirring up the mud,

					stan the leprechaun hacker
					ssc-vax!sts (soon utah-cs)
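
P.S.  To make observation 2 concrete, here is a rough sketch (written in
Python for brevity; the symbols, the single rule, and the toy interpreter
are invented for illustration only - they are not taken from PHRAN, RLL,
or any real system) of how a frame, a semantic-net link, and a production
rule can all be flattened into attribute-value pairs attached to symbols:

    # Everything below - a frame, a semantic-net link, and a production
    # rule - is stored the same way: a symbol with attribute-value pairs.
    # The particular symbols ('bird', 'canary', 'rule-1') are made up.

    kb = {
        # frame / net node: the attributes double as the links
        "bird":   {"isa": "animal", "has-part": "wings"},
        "canary": {"isa": "bird", "color": "yellow"},

        # production rule: the condition and the action are just two
        # more attributes, attached to the rule's own symbol
        "rule-1": {"condition": ("isa", "bird"),
                   "action":    ("add", ("locomotion", "flight"))},
    }

    def get(symbol, attribute):
        # follow 'isa' links so attributes are inherited, net-style
        node = kb.get(symbol, {})
        if attribute in node:
            return node[attribute]
        if "isa" in node:
            return get(node["isa"], attribute)
        return None

    def run_rules(symbol):
        # a one-pass production-rule interpreter over the same av-pairs
        for name, node in list(kb.items()):
            if "condition" not in node:
                continue                      # not a rule
            attr, value = node["condition"]
            if get(symbol, attr) == value:    # condition satisfied
                op, (a, v) = node["action"]
                if op == "add":
                    kb.setdefault(symbol, {})[a] = v

    run_rules("canary")
    print(get("canary", "locomotion"))   # -> flight  (asserted by the rule)
    print(get("canary", "has-part"))     # -> wings   (inherited frame slot)

The only point is that the frame slots, the net links, and the rule's
condition/action all live in the same table: nothing but av-pairs hung
on symbols.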