LIN%MIT-ML@sri-unix.UUCP (01/02/84)
From: Herb Lin <LIN @ MIT-ML>

Speaking as an interested outsider to AI, I have a few questions that I hope someone can answer in non-jargon. Any help is greatly appreciated:

1. Just why is a language like LISP better for doing AI stuff than a language like PASCAL or ADA? In what sense is LISP "more natural" for simulating cognitive processes? Why can't you do this in more tightly structured languages like PASCAL?

2. What is the significance of not distinguishing between data and program in LISP? How does this help?

3. What is the difference between decisions made in a production system (as I understand it, a production is a construct of the form IF X is true, then do Y, where X is a condition and Y is a procedure), and decisions made in a PASCAL program (in which IF statements also have the same (superficial) form)?

many thanks.
SCHMIDT@SUMEX-AIM.ARPA (01/04/84)
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>

You might want to read an article by Beau Sheil (Xerox PARC) in the February '83 issue of Datamation called "Power tools for programmers." It is mostly about the Interlisp-D programming environment, but it might give you some insights about LISP in general. I'll offer three other reasons, though.

1. Algol-family languages lack the datatypes to conveniently implement a large number of knowledge representation schemes. Ditto wrt. rules. Try to imagine setting up a Pascal record structure to embody the rules "If I have less than half of a tank of gas, then I have as a goal stopping at a gas station" and "If I am carrying valuable goods, then I should avoid highway bandits." You could write Pascal CODE that sort of implemented the above, but representing it as DATA would be extremely difficult. You would almost have to write a LISP interpreter in Pascal to deal with it. And then, when you've done that, try writing a compiler that will take your Pascal data structures and generate native code for the machine in question! Now do it on the fly, as a knowledge engineer is augmenting the knowledge base!

2. Algol-family languages have a tedious development cycle because they typically do not let a user load/link the same module many times as he debugs it. He typically has to relink the entire system after every edit. This allows little in the way of incremental compilation, and makes such languages tedious to debug in. This is an argument against the languages in general, and doesn't apply specifically to AI; the AI community feels the pressure more, though, perhaps because it tends to build such large systems.

3. Consider that many bugs in non-AI systems show up at compile time. If a flaw is in the KNOWLEDGE itself in an AI system, however, it will only show up in the form of incorrect (unintelligent?) behavior. Typically only LISP-like languages provide the run-time tools to diagnose such problems.
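[A sketch of what those run-time tools look like in practice, using standard Common Lisp operators (TRACE, UNTRACE, and interactive redefinition); the ESTIMATE-RANGE function and its numbers are invented for the illustration.]

```lisp
;; In a running LISP image, diagnosis needs no pre-planted hooks.
;; ESTIMATE-RANGE and its arguments are made up for this sketch.
(defun estimate-range (fuel-fraction tank-gallons mpg)
  "Miles of driving left, given the fraction of a tank remaining."
  (* fuel-fraction tank-gallons mpg))

;; TRACE logs every call to and return from the suspect function,
;; without editing or relinking anything.
(trace estimate-range)
(estimate-range 1/4 16 30)        ; watch the arguments go in

;; Found the flaw?  Redefine the function in the live image and
;; keep going -- no relink of the entire system.
(untrace estimate-range)
(defun estimate-range (fuel-fraction tank-gallons mpg)
  (float (* fuel-fraction tank-gallons mpg)))
```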
In Pascal, etc., the programmer would have to go back and explicitly put all sorts of debugging hooks into the system, which is both time-consuming and not very clean. --Christopher
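[To make the data-vs-code point concrete, here is a minimal sketch, not from the original posts, of the gas-tank and bandit rules held as ordinary LISP list data, with a few lines of interpreter. The slot names FUEL and CARGO and the matcher HOLDS-P are invented for the example.]

```lisp
;; Rules represented as ordinary list data -- not code.
;; Each rule has the shape (IF condition THEN action).
(defvar *rules*
  '((if (fuel-below 1/2)     then (add-goal stop-at-gas-station))
    (if (carrying valuables) then (add-goal avoid-highway-bandits))))

;; A toy world state: an alist of invented slots.
(defvar *world* '((fuel . 1/4) (cargo . valuables)))

(defun holds-p (condition)
  "Crude matcher: decide whether CONDITION is true in *WORLD*."
  (case (first condition)
    (fuel-below (< (cdr (assoc 'fuel *world*)) (second condition)))
    (carrying   (eq (cdr (assoc 'cargo *world*)) (second condition)))))

;; The interpreter: collect the action of every rule whose condition
;; holds.  The rules remain inspectable data that a knowledge
;; engineer could edit or extend while the system runs.
(defun fire-rules (rules)
  (loop for (nil condition nil action) in rules
        when (holds-p condition)
          collect action))
```

With a quarter tank and valuable cargo, (fire-rules *rules*) fires both rules; adding a new rule is just consing another list onto *rules*, which is the kind of on-the-fly augmentation the Pascal record structure makes so painful.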