fs@caesar.cs.tulane.edu (Frank Silbermann) (12/20/90)
Surya Mantha <663@rocksanne.WRC.XEROX.COM> mantha.wbst128@xerox.com:
> When is a syntax declarative?

Some will call a language declarative if it has a declarative subset
(regardless of how weak :-) ).  I consider a language declarative if
its programs can easily (to whom?) be viewed as assertions of fact,
as an alternative to the conventional view that a program is a command.

> Note that we talk about the declarative semantics
> of Horn clauses (cut-free, first-order restriction of Prolog).
> In this case, the terms "denotational" and "declarative"
> can be used interchangeably.
> We, however, do not talk of the declarative semantics
> of C or Pascal, for instance.

For these languages there is no clear declarative reading.
That's why their denotational semantics are so complicated.

With a referentially-transparent functional language
(free of side-effects) you can argue that certain expressions
denote functions, function arguments, function applications, and so on.
One argues that the entire program can be viewed
as a mathematical assertion, and thus that the language is declarative.

To be sure, the assertion that such-and-such an expression _denotes_
a function is, by definition, a (possibly informal) claim about the
language's _denotational_ semantics; therefore, investigation into
the denotational semantics is required to support such claims.
Furthermore, until we define the mathematical domains of discourse,
we cannot even begin to speak about functions (which are, by
definition, mappings between domains).  Hence the importance
of domain theory.

> In Mathematical Logic, primacy is given to model theory.
> We prove the *soundness* and *completeness* of the axioms
> and inference rules of the proof theory wrt the model theory
> (i.e. notions of validity -- true in all models and so forth).
> For definite Horn clauses we have the machine-independent
> denotational semantics given by the minimal Herbrand model.
> Then, the operational semantics via SLD-resolution
> is proposed, and the soundness and completeness results follow.
> This is in the general tradition of mathematical logic.
>
> Consider the analogous enterprise in programming language semantics.
> Here, the idea is to show the *equivalence* between the operational
> and denotational semantics of a language.
> But the direction is reversed.
> Some kind of a relation (either an equivalence relation
> or a partial ordering) on program *behaviours* is assumed.
> The models that we then build are said to be *adequate*
> and/or *fully abstract*.  The burden of *full abstraction*
> or the stigma of being too *concrete* lies with the models
> that are constructed to give the denotational semantics.
> The operational semantics is, in some sense,
> prior to the denotational semantics.
> Why is it not the other way around?
> Why don't we start with machine-independent semantics
> and then show that the corresponding operational models
> are sound and complete with respect to them?

Practical declarative language designs are limited to that which is
reasonable for a computer to do, i.e., the operational semantics.
Therefore, the operational semantics is thought about first.
Once the denotational semantics is understood, however, _I_ believe
we should make it normative, at least for the purpose of teaching
the language.

The theory of logic developed analogously.  First came the
"operational semantics", as ancient philosophers collected rules
of inference.  The model theory was developed much later,
as understanding increased.

In today's logic programming community, the typical order is:

	1) propose a useful extension to Prolog,
	   giving the operational semantics;

	2) ignore the model-theoretic implications
	   of the new feature.  :-)

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA
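[The minimal Herbrand model mentioned above can, for a program with a
finite Herbrand base, actually be computed by iterating the
immediate-consequence operator T_P to its least fixpoint.  A toy
Haskell sketch -- the example program and its encoding are purely
illustrative, not from the post:]

```haskell
import Data.List (nub, sort)

-- Ground atoms of a tiny definite program over a finite Herbrand base:
--   edge(a,b).  edge(b,c).
--   path(X,Y) :- edge(X,Y).
--   path(X,Z) :- edge(X,Y), path(Y,Z).
data Atom = Edge Char Char | Path Char Char
  deriving (Eq, Ord, Show)

facts :: [Atom]
facts = [Edge 'a' 'b', Edge 'b' 'c']

-- Immediate-consequence operator T_P: all atoms derivable in one
-- step from the interpretation i.
tp :: [Atom] -> [Atom]
tp i = nub $ facts
          ++ [Path x y | Edge x y <- i]
          ++ [Path x z | Edge x y <- i, Path y' z <- i, y == y']

-- Least fixpoint of T_P, i.e. the minimal Herbrand model.
herbrandModel :: [Atom]
herbrandModel = go []
  where
    go i = let i' = sort (tp i)
           in if i' == i then i else go i'
```

Iterating from the empty interpretation, the fixpoint here contains
the two edge facts plus path(a,b), path(b,c), and path(a,c).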
mantha@mum.wrc.xerox.com (S. Mantha (co-op)) (12/21/90)
In article <5492@rex.cs.tulane.edu>, fs@caesar.cs.tulane.edu
(Frank Silbermann) writes:

|> With a referentially-transparent functional language
|> (free of side-effects) you can argue that certain expressions
|> denote functions, function arguments, function applications, and so on.
|> One argues that the entire program can be viewed
|> as a mathematical assertion, and thus that the language is declarative.

Are you saying -- please correct me if I am wrong -- that referential
transparency (R.T.) is a *necessary* condition for "declarativeness"?
I don't think that is a correct assertion; it is too strong a
requirement to place upon the language.  I agree that it is a
sufficient condition for declarativeness.

Consider intensional sentences such as "I believe that X" or
"I know that Y".  Any logic that models such sentences (and thus any
language whose operational semantics mimics the proof theory for such
a logic) violates the property of referential transparency.  I would,
however, not venture to claim that such a language was not
declarative.  I feel we need a sharper notion of what we mean by
declarative programming.

As far as functional languages (FL) are concerned, I agree with you
that referential transparency is a key property.  One compromises the
compositionality of FL by introducing even such seemingly benign
entities as logical variables (and consequently full unification).
While one gets a richer notion of communication and synchronization
(with the logical variables acting as the channels) than in
traditional functional languages, and the ability to reason with
partially instantiated data structures, there are several
disadvantages.  Characterizing the behaviour of logical variables
semantically is a problem.  In the presence of "laziness", one has to
deal very carefully with unification.  The semantics should not only
be bottom-avoiding, it should also be top(error)-avoiding.
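[The bottom-avoidance point can be made concrete in any lazy
language: a consumer must be able to use the defined part of a
structure whose remainder denotes bottom, and a semantics for lazy
unification has to respect this.  A minimal Haskell illustration of
the phenomenon -- not the Lindstrom/Mantha semantics:]

```haskell
-- A structure whose tail denotes bottom (undefined).
partialList :: [Int]
partialList = 1 : 2 : 3 : undefined

-- A lazy consumer that never demands the undefined tail.
-- A "bottom-avoiding" semantics must assign this a proper value,
-- here [1,2], rather than collapsing the whole program to bottom.
firstTwo :: [Int]
firstTwo = take 2 partialList
```

Unifying against such a partially defined structure is exactly where
care is needed: forcing too much of it turns a well-defined answer
into bottom (or into a unification error, the "top" case).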
We (Gary Lindstrom and I) have, however, arrived at a somewhat
satisfactory account of lazy functional languages with logical
variables.  We use the notion of Information Systems (introduced by
Scott to give an alternative formulation of domain theory) to capture
the notion of "constraint solving" inherent in unification.  Instead
of viewing programs as plain mathematical functions, we view them as
entities that impose constraints on the values of logical variables
in a *global* logical store -- a term introduced by Saraswat in his
thesis.  We can show compositionality (modulo the logical store).

It seems plausible that such a model can be generalized to functional
programming with other kinds of constraint solving, as in the
Constraint Logic Programming model.  "Laziness" would be of real
importance, as constraints would not have to be reduced to a normal
form at each step of the computation.

|> In today's
|> logic programming community, the typical order is:
|>
|>	1) propose a useful extension to Prolog,
|>	   giving the operational semantics;
|>
|>	2) ignore the model-theoretic implications of the new feature.
|>	   :-)

I couldn't agree with you more.  There is another class of logic
programmers who are exclusively engaged in the enterprise of
providing "semantics" (on an incremental basis) but whose work has
little computational content.  The worst are those who palm off
EXTRA-LOGICAL features as being *meta-logical*.

cheers

	Surya Mantha
	University of Utah and Xerox, Webster Research Center
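[The "constraints on a global store" view can be sketched operationally:
unification imposes an equation on a store of variable bindings and
fails when the store becomes unsatisfiable.  A toy Haskell encoding of
my own, with no occurs check and none of the Information Systems
machinery described above:]

```haskell
import qualified Data.Map as M
import Control.Monad (foldM)

-- First-order terms: logical variables and constructor applications.
data Term = Var String | App String [Term]
  deriving (Eq, Show)

-- The global "store": bindings accumulated for logical variables.
type Store = M.Map String Term

-- Follow bindings until reaching an unbound variable or a
-- constructor-headed term.
walk :: Store -> Term -> Term
walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
walk _ t         = t

-- Unification as constraint solving: impose t1 = t2 on the store,
-- returning Nothing if the constraint is unsatisfiable.
-- (No occurs check, for brevity.)
unify :: Term -> Term -> Store -> Maybe Store
unify t1 t2 s = case (walk s t1, walk s t2) of
  (Var v, t) | t == Var v -> Just s
             | otherwise  -> Just (M.insert v t s)
  (t, Var v)              -> Just (M.insert v t s)
  (App f as, App g bs)
    | f == g && length as == length bs ->
        foldM (\st (a, b) -> unify a b st) s (zip as bs)
    | otherwise -> Nothing
```

For example, imposing f(X,a) = f(b,Y) on an empty store succeeds and
binds X to b and Y to a; imposing a = b fails.  A language in the
style described above would thread such a store through the whole
computation rather than treat each equation in isolation.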
fs@rex.cs.tulane.edu (Frank Silbermann) (12/22/90)
>|> With a referentially-transparent functional language
>|> (free of side-effects) you can argue that certain expressions
>|> denote functions, function arguments, function applications, and so on.
>|> One argues that the entire program can be viewed
>|> as a mathematical assertion, and thus that the language is declarative.

In article <673@rocksanne.WRC.XEROX.COM> mantha.wbst128@xerox.com
(Surya Mantha) writes:

> Are you saying -- please correct me if I am wrong
> -- that referential transparency (R.T.) is a *necessary*
> condition for "declarativeness"?

Since declarativeness is not a mathematically precise concept,
I'd be a fool to take such a strong stand.  Let's just say that
the looser the dependence of an expression upon its context,
the more declarative the language.

> Characterizing the behaviour of logical variables semantically
> is a problem.

It's not a problem for pure Prolog.  It is a problem for Prolog
with extra-logical features, however.

> In the presence of "laziness", one has to deal
> very carefully with unification.  The semantics
> should not only be bottom-avoiding,
> it should also be top(error)-avoiding.

I am revising for publication a paper by Bharat Jayaraman and myself
which combines lazy (outermost-reduction, to be precise) higher-order
functional programming with first-order Horn logic via relative set
abstraction.  Both operational and denotational semantics are based
on conventional domain theory (with a few run-time program
transformations thrown in for efficiency).  There is no top(error)
value in the language -- though certain operations will be explicitly
undefined (bottom).  Logical variables are not _explicit_ features of
the language, but are introduced as an operational optimization to
avoid blind enumeration of certain generator sets.

> We (Gary Lindstrom and I) have, however,
> arrived at a somewhat satisfactory account of
> lazy functional languages with logical variables.
> We use the notion of Information Systems
> (introduced by Scott to give an alternative formulation
> of domain theory) to capture the notion
> of "constraint solving" inherent in unification.
> Instead of viewing programs as plain mathematical functions,
> we view them as entities that impose constraints
> on the values of logical variables in a *global* logical store
> -- a term introduced by Saraswat in his thesis.
> We can show compositionality (modulo the logical store).

I'd welcome seeing anything you've written.

> The worst (of the logic programmers) are those
> who palm off EXTRA-LOGICAL features as being
> *meta-logical*.

Even if a feature _is_ meta-logical -- doesn't that mean it belongs
in a _meta-language_ (e.g. a language-oriented programmable
environment)?

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA
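[For readers unfamiliar with relative set abstraction: the idea is
roughly a set comprehension whose bound variables range over
enumerable generator sets, so solutions are found by lazily
enumerating the generators rather than by naming logical variables.
Haskell's lazy list comprehensions give the flavor -- an illustration
only, not the Jayaraman/Silbermann language:]

```haskell
-- Set abstraction relative to generator sets: the triples are
-- enumerated from the generators [1..n], [1..c], [1..b] and filtered
-- by the stated relation, with no explicit logical variables.
pythagorean :: Int -> [(Int, Int, Int)]
pythagorean n =
  [ (a, b, c) | c <- [1 .. n], b <- [1 .. c], a <- [1 .. b]
              , a * a + b * b == c * c ]
```

Because the enumeration is lazy, a consumer that demands only the
first solution forces only a finite prefix of the generators; the
operational role the post assigns to logical variables is precisely
to prune this kind of blind enumeration.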