vantreeck@curie.dec.com (11/18/88)
>Path: decwrl!sun!pitstop!sundc!seismo!uunet!munnari!vuwcomp!lindsay
>Subject: Re: Determining order of argument unification

>I have often wondered about this. It would seem to be a fairly simple
>optimisation in a Prolog compiler to look at the arguments in a call and
>process them in an order that does the easiest things first. For example:
. . .
>	Lindsay Groves

Most PROLOG interpreters/compilers are optimized for deterministic execution of
very small procedures -- in particular, naive reverse, which has only two
clauses and calls another procedure, append, which also has only two clauses.
For this case the search time is extremely small, and special optimizations for
order of search are unnecessary. These implementations have added indexing on
the first argument in the head of a clause as an afterthought, to address
real-world programs, where searching is generally the dominant time consumer.

I implemented a research version of PROLOG that supports multiple levels of
indexing on all arguments and subarguments. But this is space expensive, so I
chose to eliminate the linked list that preserved the order of clause
assertion, and I chose to use a search paradigm analogous to other languages:
search for the specific case first, and then try the catch-all case.
Therefore, this PROLOG first unifies a goal variable with any constants, then
structures, then non-void variables, and finally void variables -- almost
exactly like the paradigm you suggested. I added a hack for those less
frequent (but important) cases where the order of clauses matters. Not that
anyone would want yet another "non-standard" PROLOG; this PROLOG is not
available to the public.

George Van Treeck
Digital Equipment Corporation

>Can somebody point to a reference for "iterative deepening"?
>I have looked in the indexes of a half-dozen or so standard AI textbooks
>and found nothing. I know this is a term and a search strategy
>that Richard discusses. Richard, did you come up with the name?
>If not, do you know who did, or where the strategy is referred to
>as such (other than your messages and tutorial notes)?

The reference for Depth-First Iterative-Deepening (DFID) is Korf's paper in
Artificial Intelligence 27 (1985), pp. 97-109. DFID is asymptotically optimal
in both space and time for large search spaces. When I suggested using DFID in
PROLOG several years ago, the consensus on USENET was that the number of cases
where it would be useful was too small to justify much implementation study.
Examples of useful applications are finding chemical synthesis paths,
mathematical proofs, optimal electronic circuit syntheses, and little
reasoning problems like the monkey and bananas. There may not be a lot of
cases where it's important, but when it is needed it's very handy.

I've spent some time looking at ways to build DFID search efficiently into
PROLOG. The primary problem is that PROLOG programs have side effects which
aren't reversible on backtracking (e.g., keyboard input), and DFID does a lot
more backtracking where "normal" PROLOG would be deterministic. Ideally, one
would like a DFID predicate for calling PROLOG procedures which have no side
effects in their proof. And ideally, the overhead of checking the depth
shouldn't impact PROLOG execution in general. So I've been looking at ways to
break DFID execution out into a separate case from normal depth-first search
(much as most commercial versions of PROLOG that are based on the WAM separate
out the read and write modes).

It may also be that with DFID one would need coroutining predicates less often
to enforce constraints. That would help remove some of the need to think about
implementation and let one concentrate on the declarative statement of the
problem.

George Van Treeck
Digital Equipment Corporation
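A minimal sketch of what such a user-level DFID predicate might look like, as
a depth-bounded meta-interpreter over clause/2 (predicate names are
illustrative; this is not Van Treeck's implementation):

```prolog
% dfid/1: prove Goal by depth-first iterative deepening.  Depth bounds
% 0, 1, 2, ... are tried in turn; each clause application costs one
% unit of depth.  Works for pure, user-defined predicates accessible
% via clause/2.
dfid(Goal) :-
    natural(Bound),
    prove(Goal, Bound).

% Generate 0, 1, 2, ... on backtracking.
natural(0).
natural(N) :- natural(M), N is M + 1.

prove(true, _) :- !.
prove((A , B), D) :- !,
    prove(A, D),
    prove(B, D).
prove(Goal, D) :-
    D > 0,
    D1 is D - 1,
    clause(Goal, Body),
    prove(Body, D1).
```

As written, this sketch re-derives every shallow solution again at each larger
bound, and backtracking into dfid/1 after all solutions are exhausted loops
forever generating bigger bounds; a production version would accept only
proofs that actually reach the current bound, and would cap the bound.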
ok@quintus.uucp (Richard A. O'Keefe) (11/18/88)
In article <8811171657.AA18741@decwrl.dec.com> vantreeck@curie.dec.com writes:
>>Path: decwrl!sun!pitstop!sundc!seismo!uunet!munnari!vuwcomp!lindsay
>>Subject: Re: Determining order of argument unification
>
>>I have often wondered about this. It would seem to be a fairly simple
>>optimisation in a Prolog compiler to look at the arguments in a call and
>>process them in an order that does the easiest things first. For example:
>>	Lindsay Groves

Wayne Citrin wrote a paper at UC Berkeley in Spring 1985: "A Comparison of
Indexing and Term Reordering: Two Methods for Speeding Up the Execution of
Prolog Programs". For the things he looked at, term reordering over and above
what the WAM does rarely paid off.

The normal WAM order is to work from left to right in a term, handling
constants and variables as you find them but postponing compound terms. The
exception to this is that if the last argument of a term is compound, it is
processed straight away, before any preceding compound arguments. E.g.,
f(g(X),a,h(X),Y,g(Y)) does a, Y, g(Y), g(X), h(X), in that order.

>These implementations have added indexing on the first
>argument in the head of a clause as an afterthought to address real-world
>programs, where searching is generally the dominant time consumer.

No, first-argument indexing is NOT an afterthought. It is a vital part of the
basic machinery, analogous to CASE statements in Pascal. Indexing tells you
two things: these clauses are relevant (reducing search), and these are the
ONLY relevant clauses (determinacy).

>I've spent some time looking at ways to build DFID search efficiently into
>PROLOG. The primary problem is that PROLOG programs have side-effects which
>aren't reversible on backtracking (e.g., keyboard input). DFID does a lot more
>backtracking where "normal" PROLOG would be deterministic.

I'm afraid I've not seen anything by Van Treeck on this. I got the idea from
Mark Stickel, who has implemented it as part of his "Prolog Technology Theorem
Prover".
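The CASE-statement analogy can be illustrated with a small predicate (the
example is mine, not from the original posts):

```prolog
% With first-argument indexing, a call such as ?- day_type(tue, T).
% hashes on the atom 'tue', jumps straight to the matching clause, and
% leaves no choicepoint, because the index proves no other clause can
% match (determinacy).  Without indexing, the system would try the
% clauses in order and leave a choicepoint pointing at the rest.
day_type(mon, weekday).
day_type(tue, weekday).
day_type(wed, weekday).
day_type(thu, weekday).
day_type(fri, weekday).
day_type(sat, weekend).
day_type(sun, weekend).
```

Note that the indexing only helps when the call has its first argument bound;
a call like ?- day_type(D, weekend). still scans all seven clauses.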
Stickel points out that you need three things:
(1) Sound unification (easy to add to the WAM).
(2) A fair searching strategy (iterative deepening is dead simple to add).
(3) Resolution against opposite-sign ancestors, to handle negation.

One of the handouts in our "advanced Prolog course" is a tiny little
term_expansion/2 definition which provides iterative deepening by source-
to-source transformation. That seems to be efficient enough for most purposes.
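A reconstruction of how such a term_expansion/2 definition might work (this is
my own sketch, not the actual Quintus handout): each user predicate gets an
extra depth argument, each clause spends one unit of depth, and a driver
retries the goal with ever larger bounds. Calls to built-in predicates would
need special-casing, which is omitted here for brevity.

```prolog
% Transform  Head :- Body  into a depth-bounded version.
term_expansion((Head :- Body), (Head1 :- Depth > 0, Depth1 is Depth - 1, Body1)) :-
    !,
    add_depth(Head, Depth, Head1),
    expand_body(Body, Depth1, Body1).
term_expansion(Head, Head1) :-          % facts simply ignore the bound
    add_depth(Head, _, Head1).

expand_body((A , B), D, (A1 , B1)) :-
    !,
    expand_body(A, D, A1),
    expand_body(B, D, B1).
expand_body(Goal, D, Goal1) :-
    add_depth(Goal, D, Goal1).

% Add an extra depth argument in front of a goal's arguments.
add_depth(Goal, D, Goal1) :-
    Goal =.. [F|Args],
    Goal1 =.. [F, D|Args].

% Driver: try bounds 1, 2, 3, ... until the transformed goal succeeds.
depth_bound(D, D).
depth_bound(D0, D) :- D1 is D0 + 1, depth_bound(D1, D).

dfid_call(Goal) :-
    depth_bound(1, D),
    add_depth(Goal, D, Goal1),
    call(Goal1).
```

One design wrinkle with this naive transformation: putting the depth counter
in the first argument position defeats the first-argument indexing discussed
above, so a real implementation would more likely append the counter as a
last argument.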