RESTIVO@SU-SCORE.ARPA (04/12/85)
From: Chuck Restivo (The Moderator) <PROLOG-REQUEST@SU-SCORE.ARPA>

PROLOG Digest            Friday, 12 Apr 1985       Volume 3 : Issue 18

Today's Topics:
                            Puzzles - Maps,
           Implementations - Searching & Denotational Semantics,
               & Cuts & Snips & Control & C-Prolog Ports,
                        LP-Library - Proceedings
----------------------------------------------------------------------

Date: Tue, 9 Apr 85 21:44:49 pst
From: Allen VanGelder <AVG@Diablo>
Subject: Shapiro's coloring program

Shapiro's coloring program can be vastly improved in efficiency by
the not-not trick in "subset".  The modified rules are shown below,
but first I show how to fix "map" to correct a typo in his original
message.

The not-not in "subset" avoids blindly instantiating the colors of
adjacent countries.  Someone who can profile might be interested in
reporting the performance (after the typo is fixed) of both methods.
Two run types should be compared: (1) time to the first solution;
(2) time for all 120 solutions.  A space report would also be
interesting.  The number of solutions can be reduced to 30 by adding
":- B=blue" to the clause for "map".

It is disturbing that one such "non-logical" trick can deliver huge
performance gains over correct "pure Prolog" programs.

/*
     __________
    |    a     |
    |__________|
    |b  |c  |d |
    |___|___|__|
    |e   |f    |
    |____|_____|

    is represented by the term:
*/

map( [country(a,A,[B,C,D]),
      country(b,B,[A,C,E]),
      country(c,C,[A,B,D,E,F]),
      country(d,D,[A,C,F]),        /* <---- fix */
      country(e,E,[B,C,F]),
      country(f,F,[C,D,E]) ] ).

subset([C|Cs],Colours) :-
    \+ \+ remove(C,Colours,_),     /* <------- changed */
    subset(Cs,Colours).
subset([],_Colours).

------------------------------

Date: Tue, 9 Apr 85 10:18:34 bst
From: William Clocksin <wfc%cl.cam@ucl-cs.arpa>
Subject: Map coloring

Ehud Shapiro's map colouring program reminds me of a program I wrote
a few months ago.  It performs greedy gate assignment of digital
circuits.  The representation of circuits is similar to the
representation of countries, except that I use Prolog clause syntax
instead of lists.  So, the simple half-subtractor looks like:

    half_sub(In1,In2,Diff,Borr) :-
        xor(In1,In2,Diff),
        not(In1,T),
        and(In2,T,Borr).

The only difference between this and the countries is that this can
be nested to arbitrary depth.  For example, the logic functions
(actually relations) shown can be defined in terms of CMOS
transistors, if you like.  Anyway, the main part of greedy gate
assignment is about three lines long, and looks similar to Shapiro's
map colourer.  Another program, only slightly more complicated, finds
subcircuits isomorphic to some given ones and rewrites them into
other subcircuits.  The experience, shared with Shapiro's map
colourer, is that variables are very convenient for expressing
adjacencies.

------------------------------

Date: Wed, 10 Apr 85 07:09:38 EST
From: cugini@NBS-VMS
Subject: Finding Best Solution

In any situation where no more is known about p(X) other than that it
generates all appropriate X's (in particular, where the order of
generation is unrelated to the cost), then clearly (?) the best
algorithm is proportional to the size of the set {p(X)}.  A simple
linear procedural solution is:

    low-cost := infinity
    set-of-best := null
    for all X s.t. p(X)
        if cost(X) < low-cost then
            set-of-best := {X}
            low-cost := cost(X)
        else
            if cost(X) = low-cost then
                set-of-best := set-of-best + {X}
            endif
        endif
    end for

Is there any motivation for doing this in Prolog other than fun?  A
related issue is the usefulness of such a generic solution.
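
For anyone who does want to try it in Prolog, here is one minimal
sketch of the same linear scan.  It is only a sketch, not part of the
original posting: it assumes findall/3 is available and that cost/2
relates each candidate to a numeric cost, and the names best_set/1
and scan/3 are invented purely for illustration.

    % Sketch only (not from the original posting).  Assumes p/1
    % generates the candidates and cost/2 gives each a numeric cost.
    % Fails if there are no candidates at all.
    best_set(Best) :-
        findall(C-X, (p(X), cost(X, C)), [C0-X0|Pairs]),
        scan(Pairs, C0-[X0], Best).

    % scan(Pairs, LowestCostSoFar-BestSoFar, Best)
    scan([], _-Best, Best).
    scan([C-X|Rest], Low-Acc, Best) :-
        (   C < Low   -> scan(Rest, C-[X], Best)       % cheaper: restart the set
        ;   C =:= Low -> scan(Rest, Low-[X|Acc], Best) % a tie: add to the set
        ;   scan(Rest, Low-Acc, Best)                  % costlier: ignore
        ).

Like the procedural loop, this examines each candidate exactly once,
although the findall/3 call does build the whole candidate list in
memory first.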
As mentioned, I don't see how it could be better than linear relative
to the size of the candidate set, {p(X)}, in the absence of other
information about the way the solutions are ordered or constructed.

Notice that in the case of the shortest-path problem for graphs, this
approach quickly becomes unrealistic: it means generating *all* paths
between the two specified vertices and choosing the shortest of
these.  The number of paths is (I think) something like

    avg-degree-of-graph ** number-of-vertices.

The realistic algorithm involves building a kind of balanced tree
outward from one of the nodes (balanced in the sense that all the
reached nodes are at least as close to the origin node as any of the
unreached ones, i.e. all partial paths built are optimal) until the
other node is reached.  This ensures that the first path found is an
optimal one; the other, sub-optimal paths are never explicitly
generated and examined.  This algorithm runs, I think, in
O(number-of-vertices ** 2).

The point here is that for "large" search spaces, one would almost
surely need or want to take advantage of domain-specific knowledge in
order to prune the search.

-- John Cugini

------------------------------

Date: Tue, 9 Apr 85 13:12:11 pst
From: Allen VanGelder <AVG@Diablo>
Subject: Searching

One straightforward solution to Hilfinger's problem is:

    best(X) :-
        p(X), cost(X, N),
        \+ (p(Y), cost(Y, M), M < N).

This satisfies his criteria, I guess, but is certainly not very
efficient compared to:

    best(X) :-
        bagof(c(Y, M), (p(Y), cost(Y, M)), Cs),
        mincost(Cs, N),
        member(c(X, N), Cs).

where mincost has an obvious definition and succeeds exactly once,
and member has the usual definition.

The stigma attached to bagof and findall is misguided, in my opinion.
There are many common operations that are known not to be expressible
AT ALL by first-order operations.  Counting is the canonical
example (*).  Others, like this example, are an order of magnitude
more efficient with setof, bagof, or findall.  Why use a grossly
inefficient method like the first rule given?

(*) Actually it is possible to count how many times p(X) succeeds
with a tortuous first-order program, using the fact that Prolog
supplies a built-in predicate that orders all terms.  Don't ask me
for details; look in the literature.  Immerman in STOC 82 is a
starting place.

------------------------------

Date: Tue 9 Apr 85 17:23:32-MST
From: Uday Reddy <U-REDDY@UTAH-20.ARPA>
Subject: Semantics - Initial models

Oops!  I should have qualified my statement about the combinatorial
complexity of equality in initial-model-based languages.  That is
necessary only if the implementation wants to deal with
nontermination.  Most implementations of initial-model-based
languages get around the problem by assuming that the programs are
strongly terminating (all reduction sequences are finite).  The
implementations then disagree with the initial model for the
nonterminating case.  Note that if-then-else has to be a lazy
function and so violates the assumption of strong termination.
Special treatment is required to deal with conditional equations.

The domain-theoretic least model corresponds to an efficient
implementation even for the nonterminating case.

We are really comparing theories with different strengths here.
Initial models explain general first-order equational programs well.
Domain theory is used for fixed-point equations of the form f = t(f).
These can be added as syntactic sugar to get constructor-based
equational languages.  But more general equations (e.g. the
associativity that Goguen pointed out earlier) cannot be dealt with.
On the positive side, domain theory can easily deal with lazy and
higher-order functions.  Evidently, there are good research problems
here in bringing the two theories together.

Let me also add that this has no relevance to Horn clause languages
without equality (e.g. Prolog), for which initial models and least
models are the same.

-- Uday Reddy

------------------------------

Date: 3 Apr 85 01:19:37 GMT
From: (David Powers) decvax!mulga!elecvax.oz!DavidP@Berkeley
Subject: "soft" cuts

Coming in on a response to a query I missed concerning "soft" cuts:

I added such a construct to UNSW PROLOG in 1982.  It is a very simple
change, closely related to the traditional cut.  I used the symbol
"#", hash, and the term "half cut".  An example of its use, compared
with alternative ways of achieving the same end, is given in Chapter
5, pp. 19-28, of UNSW DCS Technical Report 8313, D.M.W. Powers and
G.B. McMahon, "A Compendium of Interesting PROLOG Programs"
(December 1983).

The advantage of the half cut is that it allows obtaining all
solutions, if any exist, and executing an alternative strategy or
error routine otherwise.  I have not found any markedly different
application.  My original application was natural language
parsing/learning.

This half cut cannot be simulated with Horn + cut (without
assert/bagof/...), but the other half (commit to the current track
within the clause, but not to the current clause) can be simulated
with cut alone.

I am happy to look out the specific mods to UNSW PROLOG for those who
request them.

-- David M. W. Powers

------------------------------

Date: 10 Apr 1985 11:57-EST
From: Saumya Debray <debray%suny-sb.csnet@csnet-relay.arpa>
Subject: snips; cut on the WPE; static control of backtracking

Keith Hughes reported a construct, "snip", which permits more
flexible cutting.  We used something similar in our implementation of
D.H.D. Warren's Abstract Prolog Machine last summer.  Our idea
(originating with David S. Warren here at Stony Brook) is to
introduce the system built-in predicates savecp(X), cutto(X), and
cutout(X).  These are low-level primitives unavailable to the user
(we don't trust him, as you can see :-)) and are introduced by our
compiler.  They work as follows: savecp(X) stores the address of the
current choice point in X; cutto(X) makes the current choice point
the same as that stored in X; and cutout(X) zaps the choice point in
X to nil.  With this, Hughes' snipped clause

    p :- q1, q2, [!, q3, q4, !], q5.

is equivalent to the clause

    p :- q1, q2, savecp(X), q3, q4, cutto(X), q5.

where X is a variable that doesn't occur anywhere except in the
savecp/cutto pair.  An ordinary cut is translated as follows: the
clause

    p(X_1, ..., X_n) :- q1, !, q2.

is transformed to

    p(X_1, ..., X_n) :- savecp(X_n+1), p1(X_1, ..., X_n, X_n+1).
    p1(X_1, ..., X_n, X_n+1) :- q1, cutto(X_n+1), q2.

where p1 is a new predicate and X_n+1 a new variable.  Our register
allocation algorithms eliminate practically all the overhead that
might be expected from the extra call.

Things get a little more interesting when these primitives are used
for static pruning of Prolog's search tree (runtime intelligent
backtracking incurs such tremendous overhead that it often isn't
worth it).  The idea is to isolate independent goals via compile-time
analysis and use our primitives to control backtracking.  For
example, take the clause

    p(X,Y,Z) :- q(X), r(Y), s(X,Z).
If it can be shown that the goals r and s will never share a variable
at runtime, then this can be transformed by the compiler into

    p(X,Y,Z) :- q(X), savecp(V1), r(Y),
                (  (savecp(V2), s(X,Z), cutout(V2))
                ;  (cutto(V1), fail)
                ).

Here, the goal "savecp(V2)" stores the choice point for the
disjunction (';').  If execution succeeds past "s(X,Z)", then the
"cutout(V2)" zaps the choice point for the disjunction; then, if
execution ever backtracks to this point, the other disjunctive branch
("cutto(V1), fail") is not executed, and execution can backtrack to
the goal "r(Y)".  On the other hand, if the goal "s(X,Z)" fails, then
the other branch of the disjunction is executed, the current choice
point is reset to V1 (which is before the call to r), and the "fail"
then causes execution to backtrack to "q(X)" rather than to "r(Y)".

This is similar to D.H.D. Warren's independent-subgoal processing in
Chat-80, except that the analysis is a lot more complicated because
of the possibility of partially instantiated structures.

There are logically correct programs which do not terminate under
Prolog's naive backtracking strategy, as the following example (due
to Jieh Hsiang) shows:

    between(X,Y,Z) :- int(X), int(Y), int(Z), gt(X,Y), gt(Z,X).

    int(0).
    int(X+1) :- int(X).

    gt(X,Y) :- X1 is X, Y1 is Y, X1 > Y1.

    :- between(0,0+1,Z).

However, the goals "int(Z)" and "gt(X,Y)" in the definition of
between/3 are independent, and when the clause is transformed as
above it behaves correctly.  In general, our strategy does not detect
all possible intelligent backtrack points.  However, the fact that
the analysis is all done statically means that the runtime overhead
is practically nonexistent.

-- Saumya Debray
   SUNY at Stony Brook

------------------------------

Date: 11 Apr 85 23:37:18 +1000 (Thu)
From: decvax!mulga!mungunni.oz!lee@Berkeley
Subject: Control

I think this is rather unfair.  If you change a goto in a FORTRAN
program, the chances are you will get a different answer out of it.
In (pure) PROLOG, if you change the computation rule, the answer(s)
remain the same, since SLD resolution is complete (*).  There really
should be more distinction between forms of "control" which can
affect answers (like cut) and those which can't (like wait
declarations and freeze).  Uday's points are only valid for the
former category.

(*) Of course, many people have pointed out that PROLOG is *not*
complete, since it uses a depth-first search strategy.  However, it
does have a weaker form of completeness: if the query evaluation
terminates, then all solutions are guaranteed to have been found.
This is all that is needed in most applications.  It's important that
your logic programming system can tell you when it has found all
solutions (if any) to your query, rather than looping indefinitely.
No system can prevent all infinite loops, so we must rely on people
to write programs which terminate, and so the weaker form of
completeness is OK.

-- Lee Naish

------------------------------

Date: Mon, 8 Apr 85 15:14 EST
From: Mark Beutnagel <Beutnagel%upenn.csnet@csnet-relay.arpa>
Subject: C-Prolog Port Notes

C-Prolog port notes
University of Pennsylvania
HP 9000 Series 200 workstation
HP-UX 2.1 operating system
Mark Beutnagel, Spring 1985

1. Version 1.5 copied from the Vax (Unix) to the HP via Kermit; the
   C-Prolog source directory and the pl subdirectory are needed.

2. The include file sys/types.h contained a typedef Unsigned which
   conflicted with the macro Unsigned used by C-Prolog.
   A clean copy of types.h was placed in the C-Prolog directory, and
   sysbits.c (~line 35) was changed to use the local types.h.

3. Changes to the makefile:
   a. -Dunix added to the compile flags
   b. float type = IEEE
   c. EOF = null entry (not AsDEC10)

4. Check the location and name of the startup file; change it in the
   makefile (or in parms.c) if not suitable.

5. The error message "Not a typewriter" appears when states are
   'saved', but restored states >>seem<< to work anyway.  This could
   be a bug in HP-UX.  It probably results from some ioctl call that
   expects a terminal being applied to the state file.

------------------------------

Date: 11 Apr 85 10:03:32 +1000 (Thu)
From: Isaac Balbin <munnari!mungunni.oz!isaac@seismo.ARPA>
Subject: C-Prolog Ports

In case anyone is interested, I have done ports of 1.4d (plus extra
fixes from Richard O'Keefe; I am unsure how this differs from 1.5)
to:

(1) a Perkin-Elmer 3240 running (almost) Unix 4.2BSD;

(2) an ELXSI System 6400 (64-bit super-computer); this is a SysV port
    of Unix to a multiprocessor, which during the Beta Test period
    ran at only ~ 2.3 times the speed of a Vax 11/780 (4.2BSD);

(3) a Pyramid 90x, which ran at ~ 1.3 times the Vax 11/780.

Note that my timings were done using the standard (but controversial)
nrev.  The fixes usually had to do with bit twiddling and masking.
Some nice tricks involving double casts were used.  I expect they
will at least also be necessary for 1.5.  If anyone wants details,
please mail me.

As for the trouble with the 68000 port regarding the 'not a
typewriter' message, I vaguely remember the same problem with 1.4d
running on 4.2 and solving it in sysbits.c by changing the 'fancy'
IsaTty assignment back to

    IsaTty = isatty(fileno(stdin));

because isatty *does* work (at least now, with pipes etc).

PS: If anyone at Edcaad is listening, could you send us a tape of
1.5?

-- Isaac

------------------------------

Date: 11 Apr 85 1225 PST
From: Yoni Malachi <YM@SU-AI.ARPA>
Subject: Lisp conference proceedings

[from SIGACT News]

ACM SIGPLAN has republished the conference proceedings of previous
Lisp conferences.

                                      Price
                        Order No.   Members   Others
    1980 Lisp Conf.      552800       $15       $21
    1982 Lisp Conf.      552820       $18       $26
    1984 Lisp Conf.      552840       $20       $27

Ordering address (prepaid):

    ACM Order Dept.
    P.O. Box 64145
    Baltimore, MD 21264

------------------------------

End of PROLOG Digest
********************