LAWS@SRI-AI.ARPA (08/01/85)
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>

AIList Digest            Thursday, 1 Aug 1985     Volume 3 : Issue 102

Today's Topics:
  Queries - PRESS & Loglan,
  Linguistics - Aymara,
  Expert Systems - Definition,
  Games - Chess Programs and Cheating,
  AI Tools - POPLOG

----------------------------------------------------------------------

Date: Thu, 25 Jul 85 10:08 EST
From: D E Stevenson <dsteven%clemson.csnet@csnet-relay.arpa>
Subject: Information on PRESS

I would like to get a copy of PRESS.  Can anyone tell me how to
obtain one?  PRESS is the symbolic algebra system developed at
Edinburgh.  I have read spots here and there about it, mostly in the
applied math literature.  It is written in PROLOG and is reputed to
be very fast.  I asked for PROLOG-based systems on the symalg net;
PRESS was the only system identified.  I am interested in
functional/logic programming and numerical analysis; I thought I
might get a copy and see what I could do with it.

Steve Stevenson
(803) 656-5880

------------------------------

Date: Sun 28 Jul 85 18:11:09-PDT
From: FIRSCHEIN@SRI-AI.ARPA
Subject: Loglan

LOGLAN was (is?) a language designed to test the Sapir-Whorf
hypothesis that natural languages limit human thought.  The Loglan
Institute was set up to publish books on the subject and to carry
out investigations in Loglan.  Does anyone know whether the Loglan
Institute still exists and what has been done with Loglan?  Does
anyone have a current address for them?

  [The most recent address I have is The Loglan Institute, Inc.,
  2261 Soledad Rancho Road, San Diego, CA 92109.  -- KIL]

------------------------------

Date: Mon 29 Jul 85 10:59:04-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Aymara

Robert Van Valin ("ucdavis!harpo!lakhota"@BERKELEY) sent me a
clipping from the SSILA Newsletter.  It's a letter from Dr. M.J.
Hardman-de-Bautista, Director of the Aymara Language Materials
Program, stressing that Ivan Guzman de Rojas is not associated with
the ALMP, does not himself speak Aymara, and bases his work in
machine translation on a grammar and dictionary written over 400
years ago by a Jesuit priest.  He claims that Mr. Guzman's published
examples of Aymara are nearly all grammatically incorrect and that
the stated meanings for acceptable sentences are often wildly
inaccurate.  "His poor understanding of Aymara word and sentence
structure results in forms that are simply unintelligible to the
Aymara."  This is not to say that Guzman's translation program can't
work, but it does cast a suspicious light on the matter.

------------------------------

Date: 30 Jul 85 15:38 PDT
From: Miller.pasa@Xerox.ARPA
Subject: Defining the Expert System

I am spending this summer as an intern for the Xerox AI Systems
group, where part of my task is to come up with a working definition
of what constitutes an "expert system."  Having done some rather
extensive reading on AI in general and expert systems in particular
throughout the past month, I have come to two conclusions:

First, due perhaps to media hype, the term "expert system" tends to
get bandied about extremely loosely and broadly and is applied to a
wide variety of programs and packages.  Second, the only definitions
which seem to exist in textbooks, articles, or company literature
all seem to go something like this: "An expert system is a computer
program which does what an expert does."  While this definition is
basic, I would like some more detail.

So here's the question: What do you, as a knowledgeable person in
the field of AI, consider to be the necessary minimum attributes of
an "expert system"?  Is it fair to give the same title to both
CADUCEUS and to 'Tell Me Doctor' from Apple?  Why or why not?  Can
you build an "expert system" with ART?  How about with TOPSI from
Dynamic Master Systems (1000 rules maximum, forward-chaining, $75)?
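For readers unfamiliar with the term: "forward chaining" means
repeatedly firing any rule whose conditions are all present in the
current set of facts, adding its conclusion, until nothing new can be
derived.  A minimal sketch in Python (the rule and fact names below
are invented for illustration and are not taken from any of the
products mentioned):

```python
# Each rule is (name, set of condition facts, conclusion fact).
# Hypothetical toy rules, purely for illustration.
rules = [
    ("r1", {"fever", "rash"}, "suspect-measles"),
    ("r2", {"suspect-measles"}, "recommend-isolation"),
]

def forward_chain(initial_facts, rules):
    """Fire rules whose conditions hold until a fixed point is reached."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            # A rule fires when all its conditions are known
            # and its conclusion is not yet in working memory.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain({"fever", "rash"}, rules)
# facts now contains both derived conclusions:
# suspect-measles and recommend-isolation
```

The interesting question raised above is not the mechanism, which is
this simple, but how many such rules, and over what sort of domain,
earn the label "expert."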
How many rules does it take to make a system 'expert'?  What kind
(and how large) of a domain must an "expert system" address?  Etc.

If you've got (or would care to write) a working definition of your
own, I'd love to hear it.  Otherwise, I'd really appreciate your
thoughts on any of the above questions or any others that may come
to mind.  Pointers toward reading sources probably wouldn't hurt.
Look at this as a very informal survey of the field --
linguistically speaking, a term can only be defined by those who use
it.  If anybody's interested, I'll be glad to compile the results
and send out a copy.

Please reply to me at Miller.pasa@Xerox.ARPA.

--Chris Miller

  [Alex Goodall supplies the following definitions in The Guide to
  Expert Systems (published by Learned Information):

    An expert system is a computer system that performs functions
    similar to those normally performed by a human expert.

    An expert system is a computer system that uses a representation
    of human expertise in a specialist domain in order to perform
    functions similar to those normally performed by a human expert
    in that domain.

    An expert system is a computer system that operates by applying
    an inference mechanism to a body of specialist expertise
    represented in the form of 'knowledge'.

  He prefers the last of these, but discusses all three in his first
  chapter.  Feigenbaum, in Knowledge Engineering for the 1980's
  (quoted by Gevarter in An Overview of Expert Systems and by Kolbus
  and Mazzetti in Artificial Intelligence Emerges) says:

    An 'expert system' is an intelligent computer program that uses
    knowledge and inference procedures to solve problems that are
    difficult enough to require significant human expertise for
    their solution.  The knowledge necessary to perform at such a
    level, plus the inference procedures used, can be thought of as
    a model of the expertise of the best practitioners of the field.

    The knowledge of an expert system consists of facts and
    heuristics.
    The 'facts' constitute a body of information that is widely
    shared, publicly available, and generally agreed upon by experts
    in a field.  The 'heuristics' are mostly private,
    little-discussed rules of good judgement (rules of plausible
    reasoning, rules of good guessing) that characterize
    expert-level decision making in the field.  The performance
    level of an expert system is primarily a function of the size
    and quality of the knowledge base that it possesses.

  I don't care for the words "intelligent" and "difficult" in the
  first paragraph, but the intention is clear.  As for size, expert
  systems for process control (e.g., using fuzzy logic or
  qualitative "derivatives") can be quite small.  I remember a news
  note in Expert Systems (a journal from Learned Information) about
  a system with 7 rules that was said to function well.  -- KIL]

------------------------------

Date: Sun 28 Jul 85 15:11:41-EDT
From: Oolong <WESALUM.A-LIAO-85@KLA.WESLYN>
Reply-to: LIAO%Weslyn.Bitnet@WISCVM.ARPA
Subject: More on Chess Programs and Cheating

In replying to Dr. Laws's objection, let me begin by saying that I
certainly agree that programs are better chess machines than people
are.  Further, I agree that superior memory and speed in a computer
do NOT give a program an unfair advantage.  But perhaps I should
clarify my position a bit more: I believe that chess programs with
moves written INTO the program cheat in the sense that they will
ALWAYS carry around ENCODED (and thus represented) moves.  Human
players do not, on the whole, do any such thing.

Perhaps it would be best to try a Searlean approach to the problem.
In particular, you might be right about the way humans play chess --
we may have some moves memorized and yet not always have them
actively represented.  Perhaps, to some extent, these might be some
of (or at least part of) Searle's unconscious Intentional states.
However, I'm not convinced that this position completely accounts
for the way we play.
Consider players who are familiar with each other's form of play.
One cannot store every move made in every game played and associate
each game with the correct player (that goes for programs as well).
Still, one recognizes particular COMBINATIONS of moves ("chunking" a
la Hofstadter) through the experience of following the other
player's games, and, moreover, direct play reinforces those
experiences.  As Searle would put it, these experiences/practices
create capacities, presumably realized as neural pathways (a sort of
learning, if you will).  So in effect, the "practiced moves" become
part of the background and never become embedded/encoded
representations.  This background only creates the capacity to
create the representations needed to decide what move to make (i.e.,
to recognize a pattern of moves made and then decide what moves are
needed to thwart such a strategy).  Certainly, if one chooses to
memorize particular moves, that is one's prerogative, but on the
whole, we don't do that.  If you will notice, this is the reason I
argued for the notion of "playing from our own experiences".  The
position I hold has the implication that we recognize strategies
through the results of our experience, and so that recognition is
actually a part of us.  I think the interpretation of "run what ya
brung" does not escape the problem of the program playing by its
author's experiences and not its own.

Now let's consider the situation where two players are not familiar
with each other's form of play.  Certainly, there can be no
pre-memorized set of optimal opening moves, since you have no
experience with this player's strategic tendencies.  Yet how is it
that you open with your favorite move when you do not know what else
to do?  Do you do it thinking "This is the right move to make", or
do you just move from experience?  How is it, then, that you decide
on what strategy to use?
It makes more sense, perhaps, to say that we use memorized moves
going into such a game but use our background (and thus experience)
to recognize what the other person is doing.  In this way a human
player can "probe" the other player's strategy, though this probing
technique may be an inefficient way of deciding on the optimal
strategy.  Probing, however, relies on experience, and again, a
computer with built-in moves cannot "probe" if it MUST rely on
built-in moves (i.e., experiences not its own).  In fact, probing is
a form of learning and of the acquisition of experience (a la
Searle).

Personally, I do not see how my position differs from Mr.
Jennings's -- I too believe that a computer should "learn how to
play chess" before it is allowed to play in a tournament, rather
than rely on moves ENCODED into the program.  I see one major
problem, however: one may keep entire games on disk/tape for use
later on in other tournaments with other players, but after a while
you may exceed disk/tape memory.  One may object by saying, "Well,
we could get a program to conveniently forget certain moves (etc.)
and install the better ones."  My problem with that response is the
question "What constitutes moves to be forgotten?"

Presumably, all this is a question of Intentionality.  After reading
Searle's chapter on the "Background" (from "Intentionality"), I am
beginning to suspect that we may just forget the details of
particular capacities and retain some sort of skeletal structure of
that capacity (whatever that may be).  Just what is forgotten, and
how it is forgotten, is a question I offer to the forum for
consideration.

- drew liao

------------------------------

Date: 23 Jul 1985 23:58:11-BST
From: Aaron Sloman <aarons%svgv@ucl-cs>
Subject: POPLOG - A mixed language development system

[Forwarded from the Prolog Digest by Laws@SRI-AI.]

Poplog is available on VAX and DEC 8600 computers.
It includes:

  - Prolog (compiled to machine code);
  - Common Lisp (large subset ready now, remainder available early
    1986);
  - POP-11 (comparable in power to Common Lisp, but uses a
    PASCAL-like syntax);
  - VED, an integrated multi-window, multi-buffer screen editor,
    which can be used for all interactions with programs, operating
    system utilities, online help, program libraries, teaching
    libraries, etc.  (VED includes 'compile this procedure',
    'compile from here to here', 'splice output into current file',
    etc.)

Incremental compilers are provided for Prolog, Lisp, and POP-11.
All the languages compile to the same intermediate POPLOG 'virtual
machine' language, which is then compiled to machine code.  The
'syscompile' facilities make it easy to add new front-end compilers
for additional languages, which all share the same back-end
compiler, editor, and environmental facilities.  Mixed-language
facilities allow sharing of libraries without re-coding and also
allow portions of a program to be written in the language which is
most suitable.

Approximate recent Prolog benchmarks, for the naive reverse test,
without mode declarations:

    VAX/780 + VMS         4.2 KLIPS
    VAX/750 + Unix 4.2    2.4 KLIPS  (750 + Systime accelerator)
    DEC 8600             13.0 KLIPS
    SUN2 + Unix 4.2       2.5 KLIPS  (also HP 9000/200)
    GEC-63 + Unix V       approx. 6 KLIPS

The Prolog is being substantially re-written, for greater modularity
and improved efficiency.  Mode declarations should be available late
1985, giving a substantial speed increase.

POP-11 and Common Lisp include both dynamic and lexical scoping, a
wide range of data-types, strings, arrays, infinite precision
arithmetic, hashed 'properties', etc.  (Not yet packages, rationals,
or complex numbers.)  POP-11 includes a pattern-matcher (one-way
unification) with segment variables and pattern-restrictors.

External_load now allows 'external' modules to be linked in and
unlinked dynamically (e.g. programs written in C, Fortran, Pascal,
etc.).
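For readers unfamiliar with the benchmark figures above: KLIPS means
thousands of logical inferences per second, and the standard "naive
reverse" test reverses a 30-element list by quadratic recursion,
which costs exactly 496 inferences.  The logic, transcribed into
Python purely to show where that count comes from (the benchmark
itself is, of course, run in Prolog):

```python
def append(xs, ys, count):
    # One call corresponds to one Prolog clause resolution
    # (one "logical inference").
    count[0] += 1
    if not xs:
        return ys
    return [xs[0]] + append(xs[1:], ys, count)

def nrev(xs, count):
    # Naive reverse: reverse the tail, then append the head at the end.
    count[0] += 1
    if not xs:
        return []
    return append(nrev(xs[1:], count), [xs[0]], count)

count = [0]
result = nrev(list(range(30)), count)
# count[0] is now 496, the canonical inference count for a
# 30-element list; KLIPS = 496 / (runtime in milliseconds).
```

A machine's KLIPS rating is simply this fixed inference count divided
by how long the Prolog system takes to execute it.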
This external loading almost amounts to a 'rapid prototyping'
incremental compiler for such languages.

A considerable number of AI projects funded by the UK Alvey
Programme, in universities and industry, now use a mixture of Prolog
and POP-11 within Poplog.

Enquiries:
  UK Educational institutions:
    Alison Mudd, Cognitive Studies Programme, Sussex University,
    Brighton, England.  0273 606755

-- Aaron Sloman

------------------------------

End of AIList Digest
********************