PROLOG-REQUEST@SUSHI.STANFORD.EDU (Chuck Restivo, The Moderator) (12/22/87)
PROLOG Digest            Wednesday, 23 Dec 1987      Volume 5 : Issue 98

Today's Topics:
                    Query - Inductive Consequence,
                    Announcement - Call for Papers,
                    Theory - Complexity,
                    LP Library - Reviews
----------------------------------------------------------------------

Date: 16 Dec 87 00:14:00 GMT
From: reddy@b.cs.uiuc.edu
Subject: Inductive consequence

I am looking for pointers to any work in logic programming dealing with inductive consequence, where the notion of inductive consequence is defined by one of the following:

. S is an inductive consequence of A if every ground instance of S follows from A.
. S is an inductive consequence of A if the least model of S is the same as the least model of A union S.
. S is an inductive consequence of A if A union S is consistent.

I think I saw some work by Kowalski referring to one of these definitions, but I can't place it.

-- Uday Reddy

------------------------------

Date: Mon, 21 Dec 87 14:27:47 -0800
From: Richard Nelson <nelson@ICS.UCI.EDU>
Subject: Call for Papers

                              ICEBOL3
                         April 21-22, 1988
                        Dakota State College
                         Madison, SD 57042

ICEBOL3, the International Conference on Symbolic and Logical Computing, is designed for teachers, scholars, and programmers who want to meet to exchange ideas about non-numeric computing. In addition to a focus on SNOBOL, SPITBOL, and Icon, ICEBOL3 will feature introductory and technical presentations on other dangerously powerful computer languages such as Prolog and LISP, as well as on applications of BASIC, Pascal, and FORTRAN for processing strings of characters. Topics of discussion will include artificial intelligence, expert systems, desk-top publishing, and a wide range of analyses of texts in English and other natural languages. Parallel tracks of concurrent sessions are planned: some for experienced computer users and others for interested novices. Both mainframe and microcomputer applications will be discussed.
ICEBOL's coffee breaks, social hours, lunches, and banquet will provide a series of opportunities for participants to meet and informally exchange information. Sessions will be scheduled for "birds of a feather" to discuss common interests (for example, BASIC users group, implementations of SNOBOL, computer generated poetry).

Call For Papers

Abstracts (minimum of 250 words) or full texts of papers to be read at ICEBOL3 are invited on any application of non-numeric programming. Planned sessions include the following:

    artificial intelligence
    expert systems
    natural language processing
    analysis of literary texts (including bibliography, concordance, and index preparation)
    linguistic and lexical analysis (including parsing and machine translation)
    preparation of text for electronic publishing
    computer assisted instruction
    grammar and style checkers
    music analysis

Papers must be in English and should not exceed twenty minutes reading time. Abstracts and papers should be received by January 15, 1988. Notification of acceptance will follow promptly. Papers will be published in ICEBOL3 Proceedings.

Presentations at previous ICEBOL conferences were made by Susan Hockey (Oxford), Ralph Griswold (Arizona), James Gimpel (Lehigh), Mark Emmer (Catspaw, Inc.), Robert Dewar (New York University), and many others. Copies of ICEBOL 86 Proceedings are available.

ICEBOL3 is sponsored by The Division of Liberal Arts and The Business and Education Institute of DAKOTA STATE COLLEGE, Madison, South Dakota.

For Further Information

All correspondence including abstracts and papers as well as requests for registration materials should be sent to:

    Eric Johnson
    ICEBOL Director
    114 Beadle Hall
    Dakota State College
    Madison, SD 57042  U.S.A.
    (605) 256-5270

Inquiries, abstracts, and correspondence may also be sent via electronic mail to: ERIC @ SDNET (BITNET)

------------------------------

Date: Wed, 16 Dec 87 03:00:22 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: complexity

An alert reader jumped on me for distinguishing between O(N.log(2,N)) and O(N.log(200,N)). Big-Oh notation is of course defined to ignore constant factors. Mea culpa.

He went on to say that you cannot take constant factors into account in asymptotic arithmetic. This happens to be true for this particular notation, but it is not true in general. The notation I want is

    f(N) = <o>(g(N))  if  lim N->infinity f(N)/g(N) = 1.

There is a notation for this "equation": one can write f(N) ~ g(N), but I'd like something I can write in an expression, just as you can use big-O or little-o or big-Omega or little-omega. What should I write for <o>?

In my reply to this alert reader, I went on to say that not only is it possible to take constant factors into account, it is irresponsible not to do so. He misunderstood this. YES, if you want to know the TIME that an algorithm will take, that is implementation dependent, and big-Oh is as good as you can get. BUT operation COUNTS are not implementation dependent, and ignoring constant factors there is disastrous. We have seen that ignoring the constant factor in the number of comparisons done by a sorting routine has led a lot of people to use quick-sort (30% slower than merge sort across a wide range of machines and languages).

To return to the current example (random permutations), the logarithmic permutation generators generate precisely <o>(N.ceil(log(T,N))) random numbers, and do as many conses, whereas the linear-expected-time permutation generator for unbounded-term Prologs generates <o>(1.58N) random numbers and <o>(3.16N) calls to arg/3, while a more complicated version generates <o>(1.06N) random numbers but does <o>(3.50N) calls to arg/3 (more calls to arg because the terms it builds are twice as long, so reading off the result takes twice as many calls). In order to compare these two versions, you have to compare the constant factors!
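[To make the notion of an implementation-independent operation count concrete, here is a small sketch in Prolog. It is not from the message above; the predicate names msort_count/3, split/3, and merge_count/4 are invented for illustration. The idea is to have the sort report exactly how many key comparisons it performed, so that two routines can be compared by count rather than by clock.]

    % msort_count(+List, -Sorted, -Comparisons)
    %   Sort List by merge sort, returning the exact number of
    %   =</2 comparisons performed, so constant factors in the
    %   comparison count can be read off directly.
    msort_count([], [], 0).
    msort_count([X], [X], 0).
    msort_count(List, Sorted, C) :-
            List = [_,_|_],
            split(List, Front, Back),
            msort_count(Front, SF, C1),
            msort_count(Back, SB, C2),
            merge_count(SF, SB, Sorted, C3),
            C is C1 + C2 + C3.

    % split(+List, -Front, -Back): deal out alternate elements.
    split([], [], []).
    split([X], [X], []).
    split([X,Y|Zs], [X|Xs], [Y|Ys]) :-
            split(Zs, Xs, Ys).

    % merge_count(+A, +B, -Merged, -Comparisons): merge two sorted
    % lists, counting one comparison per merge step.
    merge_count([], Ys, Ys, 0).
    merge_count([X|Xs], [], [X|Xs], 0).
    merge_count([X|Xs], [Y|Ys], [X|Zs], C) :-
            X =< Y, !,
            merge_count(Xs, [Y|Ys], Zs, C1),
            C is C1 + 1.
    merge_count([X|Xs], [Y|Ys], [Y|Zs], C) :-
            X > Y,
            merge_count([X|Xs], Ys, Zs, C1),
            C is C1 + 1.

    % Example query:
    %   ?- msort_count([3,1,2], S, C).
    %   S = [1,2,3], C = 2.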
It seems obvious that any algorithm which operates by keeping a pool of numbers which have (already / not yet) been generated and consults them each time it generates a new number must take O(N.log(T,N)) time in a bounded-term language (this time I do mean big-Oh), but it is not clear that some method of generating numbers in batches might not exist.

I have been asked to explain how the partition step in the quicksort-like random permutation generator works. See Knuth Vol 2, Section 3.4.2, Algorithm S.

I really would like to hear of some answers for this problem, not more questions.

------------------------------

Date: Tue, 15 Dec 87 23:34:15 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: Book

A couple of days ago I bought a copy of

    "Relational Database -- Selected Writings"
    by C. J. Date
    Addison-Wesley 1986, ISBN 0-201-14196-5 (US$34.95 + tax)

One thing which makes me angry is elementary Prolog courses where the author has obviously never troubled to read the data base literature, doesn't know the term "null value", doesn't understand the problems, and has given bad advice. (For example, in one course I saw the author use anonymous variables when he intended existential nulls, and in another the authors in all seriousness suggested that it was a good idea to put 'na' in tables -- these are REAL examples that students were paying a fair chunk of money to attend.) This book should be required reading for anyone presenting such courses. (I hope Mr J.M. and Mr M.E. are reading this...) It is not necessary for them to teach this material, but it IS important that people teaching Prolog should UNDERSTAND it.

Nearly everything in the book is good. I'll just mention two of the chapters. Chapter 15 of this book explains some of the problems with null values (I hadn't realised that the outer natural join isn't a projection of the outer equi-join until I read it), and he strongly recommends against the use of null values.
(Remember: he is writing about SQL, not Prolog!) Most refreshing. I have been telling people they shouldn't use null values in Prolog (because it hasn't got them); now I have something I can point to which explains why null values are ALWAYS a bad idea.

Chapter 19 is really good value: it is all about working out how to represent a problem in relational terms. You should probably read chapter 3 (Why Every Relation Should Have Exactly One Primary Key) and chapter 4 (Referential Integrity) first. There isn't anything staggeringly new there; anyone who has read Codd's paper on RM/T will recognise some of it, and it has a lot in common with the entity/relationship approach. But this is a straightforward presentation based on the plain old relational model. One quotation from this chapter:

    One of the advantages frequently claimed for relational
    systems -- and I believe fairly -- is precisely that database
    design is easier than it is in a nonrelational system.  And
    just to show how simple it is, here is a paper of some 50 or
    so closely printed pages to explain how it is done. . . .

(:-) Try to talk your library into getting a copy. [Presumably it already has Kent's "Data and Reality".]

------------------------------

Date: Wed, 16 Dec 87 22:35:55 PST
From: quintus!ok@Sun.COM (Richard A. O'Keefe)
Subject: New Book

The book by Warren & Maier is now in the bookshops.

    Computing with Logic: logic programming with Prolog
    David Maier / David S. Warren
    The Benjamin/Cummings Publishing Co. Inc.
    ISBN 0-8053-6681-4
    US$30.35 at Stacey's

These two authors are uniquely well qualified to write a book presenting Prolog from a data base perspective; the need has long been felt for something which would explain how to design a Prolog program using the data base so that it makes sense. Unfortunately, that's not the book they chose to write. The preface says

    This text is appropriate for a senior or first-year graduate
    course on logic programming.
    It concentrates on the formal semantics of logic programs [1],
    automatic theorem proving techniques [2], and efficient
    implementation of logic languages [3].  It is also an excellent
    reference for the computer professional wishing to do self-study
    on the fundamentals of logic programming [4].  We have included
    numerous examples to illustrate all the major concepts and
    results.  No other text deals with implementation in as much
    detail [5].

[1] I only bought the book this morning, and have only skimmed most of it. But I couldn't find one line that I recognised as a formal semantics of anything. If, like me, you think that "formal semantics" is the Scott/Strachey, Milner, Jones, &c sort of thing (like Kahn's (?) formal semantics for Prolog that a partial executor was based on), you'll find none of that here. You WILL find a succession of interpreters in Pascal for successively larger sublanguages of Prolog. Most readers will probably find the absence of real formal semantics a virtue rather than otherwise.

[2] "Automatic theorem proving techniques" means Input Resolution. If you want to know about automatic theorem proving techniques, buy another book.

    Automated Reasoning: introduction and applications
    Wos, Overbeek, Lusk, & Boyle
    Prentice-Hall, ISBN 0-13-054446-9

would be my choice.

[3] Not "logic languages". You'll find nothing to help you if you want a forward chaining logic language, or coroutining, or anything else but plain Prolog. It's none the worse for that, but do remember that this IS a >Prolog< book, not a logic language book. (The key idea behind Turbot Prolog -- if it isn't a dead fish, why does it smell bad? -- is not discussed in this book, for example.)

[4] Depends on what you mean by "the fundamentals". If you want to know what Prolog does and how a typical Prolog implementation does it, this is a pretty good book.
If you want to know why anybody would care, what is the theory behind all this, how to show that a Prolog system is (or is not) correct, or how Prolog ties in with other areas of CS such as relational data base theory, you'll need several other books instead.

[5] I haven't got a copy, but doesn't the Kluzniak et al book contain a complete Prolog interpreter in Pascal? There is >one< section in this book (11.8) which describes the WAM. It suffers from the same defect as the WAM tutorial put together at ANL, namely that they've changed all the register names and most of the instruction names (they even have a new name "WPE" for the WAM as a whole). If you want to understand the WAM, get a copy of David H.D. Warren's original paper on it, which is much clearer than most of the "expositions" since. All of the really hard aspects of Prolog implementation have been left out. A major omission is any discussion of garbage collection (well, it's not in the index, table of contents, nor in any of the running heads).

I bought this book because I work for a Prolog company, feel I ought to know what the good Prolog books are, wanted D.S. Warren to get some royalties, and was hoping for great things from this book. I'm afraid it's not all that clear to me why anyone else would buy it.

Let me say a few good things about it. The book follows a coherent plan, interleaving the presentation of a succession of sublanguages and their interpreters so that the reader will end up with a pretty fair understanding of how (some) Prolog systems and compilers work. This book contains fewer errors than any of the other Prolog books I've seen. For example, reading this book will leave you with less practical grip of Prolog than reading Bratko's, but considerably fewer misconceptions and bad habits. In the sections that I've checked, the authors have used technical terms clearly and consistently.
I don't expect anyone to come to any harm from reading this book, which is more than I can say of most Prolog texts. This is a well written book; I'm just disappointed that such good soldiers weren't fighting where the line was thinnest.

How does this book rate on the Touchstone (all solutions predicates)? The topic is not treated in depth, but what they do say about setof/3 and bagof/3 is correct. They even present them in the right order: setof/3 first, and then bagof/3 as a variant of setof/3. They present code for findall/3. Now the description of retract/1 in the text is quite correct, but the code for findall/3 is wrong. They write (using long variable names, good point):

    findall(T, G, L) :-
            assert_answers(T, G),
            collect_answers(X),
            L = X.

    assert_answers(T, G) :-
            call(G),
            assertz(answer_fact(T)),
            fail.
    assert_answers(_, _).

    collect_answers([T|Ts]) :-
            retract(answer_fact(T)),
            !,
            collect_answers(Ts).
    collect_answers([]).

There is so much about this that is right, so many errors they have avoided. Pity nested findalls won't work...

By the time they have worked up to full Prolog, you should stop imitating their examples. Their aim is quite explicitly clarity rather than efficiency, and several of the examples will leave a lot of choice points around. (For example, the code they give for testing whether a term is ground uses functor/3 and arg/3 as it should, but letting N be the number of function and constant symbol occurrences in the term under consideration, it will leave N choice points behind.) You could do worse than imitate their layout, however (you could imitate Bratko's layout; that would be much worse).

------------------------------

End of PROLOG Digest
********************