yfeng@watnow.waterloo.edu (Feng Yang) (11/06/90)
Hi there, I am a novice in logic programming. If the following questions
are too naive to put on the net, please forgive me.

(1) Why does Prolog use depth-first, left-to-right search?  I understand
that this approach probably requires the least memory, but it also causes
a lot of trouble, in that two logically equivalent programs are not always
equivalent in Prolog.  E.g. the well-known "repeat":

	repeat.
	repeat :- repeat.

will always succeed, while

	repeat :- repeat.
	repeat.

will not.  This kind of problem could be partially remedied just by using
breadth-first search!  Are there any other justifications besides the
memory consideration?

(2) I would like to know of any work on proving two Prolog programs
equivalent (or not), or proving that one program is a subset of another.

Thank you very much.

Feng
-- 
|---------------------------------------------------------------------------|
| Feng Yang                                                                 |
|---------------------------------------------------------------------------|
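To make the breadth-first point concrete, here is a toy sketch (all names
are invented for illustration; it ignores answer bindings, so it handles
only this propositional example) under which BOTH clause orders succeed:

```prolog
:- use_module(library(lists)).

% Toy breadth-first interpreter (propositional only: it ignores
% answer bindings).  The frontier is a list of goal lists; every
% branch at depth N is expanded before any branch at depth N+1.
bfs(Frontier) :- member([], Frontier), !.   % an empty branch = success
bfs(Frontier) :-
    findall(Next,
            ( member([G|Gs], Frontier),
              rule(G, Body),
              append(Body, Gs, Next) ),
            Frontier1),
    Frontier1 \= [],
    bfs(Frontier1).

% The "bad" clause order, written as rule(Head, BodyList) facts:
rule(rep, [rep]).
rule(rep, []).
```

Here ?- bfs([[rep]]). succeeds even though the looping clause comes
first, because both alternatives are expanded level by level.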
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (11/06/90)
In article <1990Nov5.182613.1571@watserv1.waterloo.edu>, yfeng@watnow.waterloo.edu (Feng Yang) writes:
> (1) Why does Prolog use depth-first, left-to-right search?

When you come right down to it, the answer is "because THAT is what gave
us the procedural interpretation of Prolog in the first place"; that is
precisely the thing that made Prolog usable as a programming language.

> but it also causes a lot of trouble, in that two logically equivalent
> programs are not always equivalent in Prolog.  E.g. the well-known "repeat":

I don't know of any programming language in which it is impossible to find
logically equivalent programs which are behaviourally different (at least
in the sense of taking greatly different amounts of time and/or space).
(Yes, I do know about lazy functional languages.  They're the ones I had
in mind where it's easy to find gross cost differences between equivalent
programs.)

> Are there any other justifications besides the memory consideration?

Breadth-first search can require EXPONENTIALLY more memory than
depth-first search, to the point that breadth-first search on non-trivial
problems is almost always completely infeasible.  We are talking about
ENORMOUS differences here.

On the other hand, it is trivially easy to do iterative deepening in
Prolog (see "The Craft of Prolog", plug plug), which gives you the safety
of breadth-first search with the efficiency of depth-first.
-- 
The problem about real life is that moving one's knight to QB3
may always be replied to with a lob across the net.  --Alasdair Macintyre.
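For the curious, a minimal depth-bounded meta-interpreter in that spirit
(my own sketch, not the version from the book; the predicate names are
invented):

```prolog
:- dynamic loopy/0.       % clause/2 needs access to the clauses

% The problematic clause order from the original post:
loopy :- loopy.
loopy.

% Iterative deepening: length(_, D) enumerates D = 0, 1, 2, ...
% on backtracking, so failed bounds are retried with a larger one.
id_call(Goal) :-
    length(_, Depth),
    bounded_call(Goal, Depth).

% A meta-interpreter that fails when the depth bound runs out.
bounded_call(true, _) :- !.
bounded_call((A, B), D) :- !,
    bounded_call(A, D),
    bounded_call(B, D).
bounded_call(Goal, D) :-
    D > 0,
    D1 is D - 1,
    clause(Goal, Body),
    bounded_call(Body, D1).
```

With this, ?- id_call(loopy). succeeds at a small depth bound, where
plain ?- loopy. loops forever: the bound cuts off the infinite left
branch and backtracking reaches the fact.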
jha@cs.ed.ac.uk (Jamie Andrews) (11/07/90)
[Disclaimer: No, I didn't prompt Feng Yang to post the question! :-)]

In article <1990Nov5.182613.1571@watserv1.waterloo.edu> yfeng@watnow.waterloo.edu (Feng Yang) writes:
>(1) Why does Prolog use depth-first, left-to-right search?

There seem to be a few reasons.

o The efficiency considerations you mention.  For many problems, a
  breadth-first algorithm seems to take up too much extra space, and
  too much extra time switching from goal to goal.

o Depth-first interpreters can be easier to write and make efficient.

o Depth-first, left-to-right search has a relatively simple execution
  model which is relatively easy for programmers to use when debugging
  programs.  Debugging a program on a breadth-first system seems to
  involve keeping track of more information at one time.

o Another debugging consideration is that with a breadth-first
  interpreter, changes in processor load and/or minor changes in the
  program may affect the order in which solutions are returned.  This
  can be very confusing for debugging purposes.

o The Prolog strategy seems to be "not incomplete enough" for many
  problems to warrant switching to breadth-first.  There are certainly
  examples (such as the one you give) and problem domains in which
  parallel interpreters are desirable, but these seem to be in the
  minority.

>(2) I would like to know of any work on proving two Prolog programs
>equivalent (or not), or proving that one program is a subset of another.

You could look at my paper in NACLP'90, "The Logical Structure of
Sequential Prolog", which gives proof systems for proving some properties
of logic programs.  I don't think those systems would be very useful for
proving equivalences between programs, but the sequent calculus version
(forthcoming) might be.
-- 
--Jamie.
  Original material copyright (c) 1990 by Jamie Andrews; jha@lfcs.ed.ac.uk
  for distribution only on unmoderated USENET newsgroups.
"Autumn, to me the most congenial of seasons; the university, to me the
 most congenial of lives"  --Robertson Davies
powers@uklirb.informatik.uni-kl.de (David Powers) (11/09/90)
ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>In article <1990Nov5.182613.1571@watserv1.waterloo.edu>, yfeng@watnow.waterloo.edu (Feng Yang) writes:
>> (1) Why does Prolog use depth-first, left-to-right search?

>When you come right down to it, the answer is "because THAT is what
>gave us the procedural interpretation of Prolog in the first place";
>that is precisely the thing that made Prolog usable as a programming
>language.

YES and NO!  It is not the only possible control structure that allows a
procedural interpretation, and it is not even the best.  It was one of
the simplest to implement, and it does admit significant optimization for
efficient execution.  In fact there are control strategies which, for
many programs, can execute in a number of inference steps logarithmic (or
better) in the number taken by Prolog (see my (Powers) CONG work, or
Tarnlund et al.'s REFORM work).

Kowalski defined Logic Programming as "Logic + Control".  But SOMETIMES
the control requirements become too much in PROLOG, obscuring the
declarative semantics, simulating another control structure, etc.  This
negates the advantages of compilation and optimization IN THESE CASES.

>> but it also causes a lot of trouble, in that two logically equivalent
>> programs are not always equivalent in Prolog.  E.g. the well-known "repeat":

>I don't know of any programming language in which it is impossible to find
>logically equivalent programs which are behaviourally different (at least
>in the sense of taking greatly different amounts of time and/or space).
>(Yes, I do know about lazy functional languages.  They're the ones I had
>in mind where it's easy to find gross cost differences between equivalent
>programs.)

Logical equivalence is very broad, and it is always possible to add work
which will affect efficiency but not results.  Executing a cut-free
Prolog program in non-SLD ways will naturally be influenced by search
order.  (SLD is the top-down, left-to-right PROLOG control.)
Research is proceeding steadily on ways of better executing declaratively
nice programs with unfortunate SLD behaviour.  This includes work by
Naish with delays; Warren and Pereira with goal reordering; Verschaetse,
De Schreye and Bruynooghe with manual or automatic selection of an
alternative strategy (by rewriting before executing with SLD); and my
(Powers) work on programming in a theorem prover guided by heuristics.
In particular, it is possible for such techniques to eliminate the
pathological and undesirable executions for at least some classes of
examples.

>> Are there any other justifications besides the memory consideration?

>Breadth-first search can require EXPONENTIALLY more memory than
>depth-first search, to the point that breadth-first search on non-trivial
>problems is almost always completely infeasible.  We are talking about
>ENORMOUS differences here.

>On the other hand, it is trivially easy to do iterative deepening in
>Prolog (see "The Craft of Prolog", plug plug), which gives you the safety
>of breadth-first search with the efficiency of depth-first.

Iterative deepening is a useful compromise.  But it should also be noted
that the characterization "breadth-first" admits some variations.  The
search "tree" actually has two dimensions, corresponding to the
"left-to-right" and the "top-down" of SLD: the AND dimension of working
on a consistent set of subgoals in some order, and the OR dimension of
working on the alternative elaborations of a subgoal in some order.  The
possibilities for breadth-first search correspond to those known in
Parallel Logic Programming as AND-parallelism and OR-parallelism (or the
combination thereof).  A "usual" definition of breadth-first is that new
subgoals are added at the end of a QUEUE rather than the beginning of a
STACK.  This corresponds to the AND dimension and CAN still be
implemented with a form of backtracking, obviating a space explosion.
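As a sketch of that QUEUE idea (my own toy formulation; the predicate
names and the little object program are invented): new subgoals go to the
BACK of the resolvent, while clause choice remains ordinary backtracking,
so there is no frontier to store and no space explosion.

```prolog
:- use_module(library(lists)).

% AND-breadth-first: the resolvent is kept as a queue.  The selected
% goal is taken from the FRONT and its body is appended at the BACK,
% instead of being pushed on the front as in SLD.
% Clause choice (the OR dimension) is still plain backtracking.
qsolve([]).
qsolve([Goal|Rest]) :-
    rule(Goal, Body),
    append(Rest, Body, Queue),
    qsolve(Queue).

% A tiny object program, represented as rule(Head, BodyList) facts:
rule(nat(zero), []).
rule(nat(s(N)), [nat(N)]).
```

E.g. ?- qsolve([nat(s(s(zero)))]). succeeds, with subgoals visited in
queue order rather than stack order.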
The replacement of OR backtracking by elaborating a disjunctive set of
bindings or constraints can be implemented independently of the above
sort of breadth-first search.  A full breadth-first search can require
space exponential in the length of the "proof".  The time for a cut-free
program will not necessarily be worse, as the whole search space will be
explored.  But the addition of an appropriate (e.g. oracular) control
heuristic can bring EITHER back to linear.

But I reiterate: there are yet other control strategies.  I think the aim
of Logic Programming should be to increasingly give the programmer the
freedom to program declaratively without having to worry about control,
and to allow the intransigence to be handled by high-level introduction
of heuristics.  Logic Programs without a great deal of blind search
should be able to have their control supplied efficiently and cheaply
through analysis.  Artificial Intelligence applications, which can really
make use of the fact that we actually have a Theorem Prover at our
disposal, should perhaps rely on a new maxim:

	Artificial Intelligence = Logic + Heuristics

------------------------------------------------------------------------
David Powers         +49-631/205-3449 (Uni);  +49-631/205-3200 (Fax)
FB Informatik        powers@informatik.uni-kl.de;  +49-631/13786 (Prv)
Univ Kaiserslautern    * COMPULOG - Language and Logic
6750 KAISERSLAUTERN    * MARPIA   - Parallel Logic Programming
WEST GERMANY           * STANLIE  - Natural Language Learning

Riddle: What is the difference between the university and me?
Disclaimer: My opinion.