narain@randvax.UUCP (Sanjai Narain) (05/26/90)
Of course, the difficulty of reasoning about non-deterministic algorithms
depends upon the algorithm. What I said was that by expressing these in
logic, one can reason about them the same way as one reasons about other
sentences in logic. For example, it is quite easy to show that the
following non-deterministic program:

	subset([],[]).
	subset([U|V],Z):-subset(V,Z).
	subset([U|V],[U|Z]):-subset(V,Z).

will compute exactly 2**n subsets of a set of length n (represented as a
list), and that it does so in O(2**n) SLD-resolution steps.

An interesting question is this: why is it that algorithmic knowledge can
be expressed using definite clauses, but not using full first-order logic
(at least in any obvious way)? Other logics in which one can express
algorithmic knowledge are equational logic, lambda calculus, and
combinatory logic.

Sanjai Narain
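[For concreteness, here is the program's behaviour on a two-element set,
with solutions in the order produced by Prolog's top-down clause
selection -- a small worked example, easily checked by hand:]

```prolog
% The program above, queried for the subsets of [a,b].
subset([],[]).
subset([U|V],Z) :- subset(V,Z).
subset([U|V],[U|Z]) :- subset(V,Z).

% ?- subset([a,b],Z).
% Z = []  ;  Z = [b]  ;  Z = [a]  ;  Z = [a,b].
% Exactly 2**2 = 4 solutions, as claimed.
```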
jah@munginya.cs.mu.OZ.AU (James Harland) (05/27/90)
In article <2554@randvax.UUCP>, narain@randvax.UUCP (Sanjai Narain) writes:

[stuff deleted]

> An interesting question is this: why is it that algorithmic knowledge
> can be expressed using definite clauses, but not using full first-order
> logic (at least in any obvious way)?

A subject close to my heart. In brief, I think the answer is that the
restriction to definite formulae is strong enough to ensure the
completeness of a special kind of proof, i.e. those found by
SLD-resolution. The close association between the proof search process
and computation (a la the original Kowalski scheme) doesn't seem to hold
for arbitrary proofs in full first-order logic, for various technical
and pseudo-technical reasons, and hence full first-order logic is not as
suitable for programming purposes as definite clauses.

In order to extend the class of formulae which can be used as programs,
it is important to preserve the strong proof properties of definite
clauses. Precise definitions of these properties may differ, but one
interesting one, studied by Dale Miller et al., is the class of uniform
proofs. An in-depth discussion is probably a bit long for this forum,
but this class of proofs leads naturally to a class of formulae which
generalise definite clauses to a larger subset of first-order logic,
known as hereditary Harrop formulae. Uniform proofs are sound and
complete for this class of formulae, and hence such formulae may be used
as logic programs.

It is an informal conjecture that this class of formulae forms the
"ultimate" first-order logic programming language -- at least as far as
uniform proofs are concerned. I am currently writing a paper on some
ways to make a formal connection of this kind, i.e. roughly that there
is no class of formulae which properly includes hereditary Harrop
formulae for which the completeness of uniform proofs can be guaranteed.

Hope this helps.

Cheers,

James Harland.
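[For readers who have not met them, the shape of first-order hereditary
Harrop formulae can be sketched as follows -- this grammar is an
editorial paraphrase of the definition of Miller et al., not part of the
original post (A atomic, D program clauses, G goals):]

```
D ::= A | G => A | D & D | forall x D
G ::= A | G & G | G v G | exists x G | D => G | forall x G
```

[Roughly, definite clauses are the special case in which implications
and universal quantifiers do not occur in goals.]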
blenko-tom@CS.YALE.EDU (Tom Blenko) (05/29/90)
In article <2554@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) writes:
|Of course, the difficulty of reasoning about non-deterministic algorithms
|depends upon the algorithm. What I said was that by expressing these
|in logic, one can reason about them the same way as one reasons about
|other sentences in logic. For example, it is quite easy to show that
|the following non-deterministic program:
|
|	subset([],[]).
|	subset([U|V],Z):-subset(V,Z).
|	subset([U|V],[U|Z]):-subset(V,Z).
|
|will compute exactly 2**n subsets of a set of length n (represented as
|a list), and whose complexity is O(2**n) SLD-resolution steps.

I don't understand. I don't expect this program to terminate for the
query

	subset(X,1)

so perhaps it is not so easy to show? Or if so, how?

	Tom
narain@randvax.UUCP (Sanjai Narain) (05/31/90)
I thank James Harland for a very interesting response to my query.

Such ideas are sometimes obscured by a frequent interpretation of
"declarative programming" or "executable specifications", i.e. where one
only has to specify what the problem is, and not how it is to be solved.

In a sense, an executable specification is quite easy to obtain. Suppose
one can specify a problem in a formal language. Then, one can use any
complete theorem prover for it to query that specification. So, what is
to distinguish doing this in full first-order logic from doing the same
in definite clauses (with SLD-resolution)? It is the possibility of
specifying algorithmic knowledge in the latter.

Often, when people are told about the merits of Prolog, especially its
logical basis, they assume they are relieved of the responsibility to
specify algorithmic knowledge. Then, they notice unpleasant aspects,
e.g. the frequent inability to run programs backwards, having to worry
about the ordering of goals, search for proofs by backtracking, and the
absence of logical negation. Finally, they say, "Oh, Prolog is just
another programming language. We still have to worry about *how* to
solve a problem."

Of course this is true! But the interesting aspect is that Prolog allows
one to think about algorithms in a logical manner (in addition to
offering good expressive power). The procedural interpretation of a
logic is a very interesting idea. Perhaps equally interesting is a
logical interpretation of a procedure.

Sanjai Narain
narain@randvax.UUCP (Sanjai Narain) (05/31/90)
# |	subset([],[]).
# |	subset([U|V],Z):-subset(V,Z).
# |	subset([U|V],[U|Z]):-subset(V,Z).
# |
# |will compute exactly 2**n subsets of a set of length n (represented as
# |a list), and whose complexity is O(2**n) SLD-resolution steps.
#
# I don't understand. I don't expect this program to terminate for the
# query
#
#	subset(X,1)
#
# so perhaps it is not so easy to show? Or if so, how?
#
#	Tom

The SLD-search tree for the query subset([a1,..,an],Z), Z a variable,
will consist of O(2**n) branches. Let this number of branches be F(n).
Then, F(n+1) = 2*F(n) + 2. This is because the query
subset([a1,..,an+1],Z) yields two branches, each ending in a query of
the form subset([a1,..,an],_). Solving this recurrence yields
F(n) = 3*2**n - 2.

Correctness can be proved similarly, in particular, by observing that
each statement is a correct fact about the subset relation.

Sanjai Narain
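[The recurrence can itself be transcribed as a small Prolog program and
checked; the base case F(0) = 1 -- a single branch for subset([],Z) --
is an editorial assumption, not stated in the post:]

```prolog
% F(0) = 1 (assumed: the query subset([],Z) has a single branch).
% F(N+1) = 2*F(N) + 2, as in the recurrence above.
f(0, 1).
f(N, F) :- N > 0, M is N - 1, f(M, G), F is 2*G + 2.

% ?- f(1,F).  F = 4.     ?- f(2,F).  F = 10.    ?- f(3,F).  F = 22.
% Each agrees with the closed form 3*2**n - 2.
```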
lee@munnari.oz.au (Lee Naish) (06/01/90)
In article <2562@randvax.UUCP> narain@randvax.UUCP (Sanjai Narain) writes:
>
>In a sense, an executable specification is quite easy to obtain. Suppose
>one can specify a problem in a formal language. Then, one can use any
>complete theorem prover for it to query that specification. So, what is
>to distinguish doing this in full first-order logic from doing the same in
>definite clauses (with SLD-resolution)? It is the possibility of
>specifying algorithmic knowledge in the latter.

I am somewhat surprised that no-one has yet (to my knowledge) produced a
logic program development/teaching environment which includes a decent
complete theorem prover (preferably connected to a declarative debugging
system). It seems like such an obvious thing to do. It allows you to
first get the logic right, then concentrate on the algorithm, using
SLDNF resolution.

I think this would be particularly nice for teaching purposes. We could
first teach students to write specifications in logic, then explain
SLDNF resolution (so they could write Prolog code which does not loop),
then explain Prolog implementation (so they could write tail recursive
code, not leave choice points around, etc.).

The testing facility in the NU-Prolog Debugging Environment goes a
little way in this direction (it has a meta-interpreter which uses a
fair search strategy, and is connected to a declarative debugger), but
the technology exists to do much better.

Logic programming has a huge potential for great programming
environments. It's a great shame that so few people are working in the
area.

lee
jah@munginya.cs.mu.OZ.AU (James Harland) (06/05/90)
In article <2562@randvax.UUCP>, narain@randvax.UUCP (Sanjai Narain) writes:
>
> In a sense, an executable specification is quite easy to obtain. Suppose
> one can specify a problem in a formal language. Then, one can use any
> complete theorem prover for it to query that specification. So, what is
> to distinguish doing this in full first-order logic from doing the same in
> definite clauses (with SLD-resolution)? It is the possibility of
> specifying algorithmic knowledge in the latter.

True. An interesting point to note is that for uniform proofs, one only
needs to consider sub-formulae of the goal in order to find proofs, and
so each of the logical connectives corresponds to an operation in a
search space. Hence, one can interpret the logical connectives as some
kind of instruction. This, I think, is getting closer to what you
describe as algorithmic knowledge above.

> Often, when people are told about the merits of Prolog, especially its
> logical basis, they assume they are relieved of the responsibility to
> specify algorithmic knowledge.

[more]

I think this is due to the way that a successful SLD-derivation can be
interpreted as a proof in first-order logic. That does not mean that a
similar proof has a successful SLD-derivation. For example, if the goal
A,B succeeds in Prolog, then we may interpret this as a proof of
A \land B, but that does not necessarily mean that the goal B,A will
succeed, as it may loop. Hence, the interpretation is "one-way".

However, there are some cases in which we can reverse the direction --
i.e. when the program does not rely on a particular computation rule and
ordering of clauses. Thus if we rely only on the existence of a proof,
rather than the manner in which it is found, then programmers are
relieved of the responsibility to specify algorithmic knowledge.
Otherwise, if we want a stronger result, we need to do more work. As
ever, it is all a trade-off.
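[A minimal program illustrating the one-way interpretation -- the
predicate p/1 and its clauses are a hypothetical editorial example, not
from the discussion:]

```prolog
% The recursive clause is deliberately listed first.
p(f(X)) :- p(X).
p(a).

% ?- Y = a, p(Y).   succeeds: p(a) matches the second clause.
% ?- p(Y), Y = a.   loops: with Y unbound, p(Y) left-recurses
%                   through p(f(X)) :- p(X) and never reaches p(a).
```

[Both goal orders correspond to the same provable formula, yet only one
ordering yields a successful SLD-derivation under Prolog's computation
rule.]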
> But the interesting aspect is that Prolog allows one
> to think about algorithms in a logical manner (in addition to offering
> good expressive power).

This is the part that should be stressed, I think, rather than thinking
of Prolog as programming in logic.

Cheers,

James.