mantha@mum.wrc.xerox.com (S. Mantha (Visiting Scientist)) (12/19/90)
This is a question to the semanticists on this newsgroup. What follows is my understanding of the subject (so, it may be all wrong!).

In mathematical logic, primacy is given to model theory in the following sense: we prove the *soundness* and *completeness* of the axioms and inference rules of the proof theory with respect to the model theory (i.e., notions of validity -- truth in all models, and so forth).

Consider the analogous enterprise in programming language semantics. Here, the idea is to relate the operational and denotational semantics of a language (the so-called full abstraction property: denotational equality should coincide with observational equivalence). But the direction is reversed. Some relation (either an equivalence relation or a partial ordering) on program *behaviours* is assumed, and the models we then build are said to be *adequate* and/or *fully abstract*. The burden of *full abstraction*, or the stigma of being too *concrete*, lies with the models constructed to give the denotational semantics. The operational semantics is, in some sense, prior to the denotational semantics.

Logic programming is different in this respect. For definite Horn clauses we have a machine-independent denotational semantics given by the minimal Herbrand model (let us ignore negation for the moment). The operational semantics is given by SLD-resolution, and the usual soundness and completeness results follow. This is in the general tradition of mathematical logic. Why is this not true of other programming languages? Why don't we start with a machine-independent semantics and then show that the corresponding operational models are sound and complete with respect to it?

A colleague responded in the following way: in computing, you can't question the physics. If your machine tells you "it is so", then it is so. You can't argue with that. Hence the notion that our model theory has to be, at least, *adequate* (the terminology itself suggests the inferior status of semantics).
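To make the minimal Herbrand model concrete, here is a small sketch (my own illustration, not any standard library) of the immediate-consequence operator T_P for a definite program. For simplicity the clauses are already ground; in general T_P ranges over all ground instances of the clauses. Iterating T_P from the empty interpretation converges to the least Herbrand model -- the machine-independent denotation mentioned above.

```python
# A definite clause is (head, [body atoms]); ground atoms are strings.
program = [
    ("edge(a,b)", []),
    ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["edge(a,b)", "path(b,c)"]),
]

def t_p(interpretation):
    """One application of T_P: heads of clauses whose bodies hold."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_herbrand_model():
    """Iterate T_P from the empty set up to the least fixpoint."""
    model = set()
    while True:
        new = t_p(model)
        if new == model:
            return model
        model = new

print(sorted(least_herbrand_model()))
# ['edge(a,b)', 'edge(b,c)', 'path(a,b)', 'path(a,c)', 'path(b,c)']
```

Note that nothing in this construction mentions a machine or a search strategy; SLD-resolution is a separate, operational account that is then proved sound and complete with respect to this model.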
A related question: when is a syntax declarative? Note that we talk about the declarative semantics of Horn clauses (the cut-free, first-order restriction of Prolog). In this case, the terms "denotational" and "declarative" can be used interchangeably: the denotations of the predicates are given by the least Herbrand model, and those of the terms by the Herbrand interpretation (the term algebra with free constructors). We do not, however, talk of the declarative semantics of C or Pascal, for instance. It is another matter that these languages can be given a denotational semantics.

I could think of the following: while the denotational semantics of C, Pascal, or assembly is completely machine-independent, the *construction* of the denotation itself depends heavily on the operational details of some abstract machine -- it is, in fact, little more than the composition of a sequence of state transition functions. In *declarative* languages, on the other hand, the denotation lies on the surface of the syntax and is easy to comprehend.

So, what makes a language declarative?

cheers
Surya Mantha
University of Utah and Xerox, Webster Research Center.
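As a postscript, the "composition of state transition functions" picture above can be made concrete. The sketch below (illustrative names only, not any standard semantics toolkit) gives the textbook denotational treatment of a tiny imperative language: each command denotes a function from states to states, and sequencing is literally function composition -- machine-independent, yet visibly built out of operational ingredients.

```python
# States are dicts mapping variable names to integers.

def assign(x, expr):
    """[[x := e]] : State -> State."""
    return lambda s: {**s, x: expr(s)}

def seq(c1, c2):
    """[[c1; c2]] = [[c2]] o [[c1]] -- composition of transitions."""
    return lambda s: c2(c1(s))

def while_(cond, body):
    """[[while b do c]] -- the least fixpoint, unrolled iteratively."""
    def run(s):
        while cond(s):
            s = body(s)
        return s
    return run

# x := 5; y := 1; while x > 1 do (y := y * x; x := x - 1)
factorial = seq(
    assign("x", lambda s: 5),
    seq(assign("y", lambda s: 1),
        while_(lambda s: s["x"] > 1,
               seq(assign("y", lambda s: s["y"] * s["x"]),
                   assign("x", lambda s: s["x"] - 1)))))

print(factorial({})["y"])  # 120
```

The denotation of `factorial` is a perfectly abstract mathematical function, but one can only read it off the syntax by mentally running an abstract machine -- which is exactly the contrast with the Horn-clause case drawn above.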
Path: mum.wrc.xerox.com!mantha
Newsgroups: comp.lang.functional
Distribution: world
From: mantha@mum.wrc.xerox.com (S. Mantha (Visiting Scientist))
Reply-To: mantha.wbst128@xerox.com
Subject: A question on Semantics
Keywords: Full Abstraction, Denotational and Operational Semantics