[comp.lang.functional] Intermediate Codes for Functional Languages

winfabi@dutrun.UUCP (Hans Toetenel) (11/30/90)

This message doesn't come from me, please reply by mail to:
kees@dutiaa.tudelft.nl

Hello,

I would like to know more about intermediate codes for functional languages.
I know about the FLIC-article by Peyton Jones in Sigplan Notices
but I would like to know more about whether such intermediate codes
have been described in the literature and/or used in practice.

Sorry if this has been asked before, but I am relatively new to this group.
I will try to summarize to the net if sufficient response is received.

Thank you in advance,
Kees Pronk.

D.Parrott@cs.ucl.ac.uk (12/04/90)

FLIC is probably the most universal of the intermediate codes because
it was designed to be.  The following paper is the definitive
document:

%A S.L. Peyton Jones
%A M.S. Joy
%T FLIC - A Functional Language Intermediate Code
%R UCL Internal Note 2048
%O Also Warwick University Research Report 148 (as part of the
   Functional Language Implementation Project, FLIP) and Glasgow
   University Departmental Report
%D August 1989


You also might like to consider:

%A Zena Ariola
%A Arvind
%T P-TAC: A Parallel Intermediate Language 
%I ACM
%J Functional Programming Languages and Computer Architecture
   Conference Proceedings
%P 230--242
%D September 1989

which deals with an intermediate code for parallel, three address
representations of functional programs, and:

%A J.R.W. Glauert
%A J.R. Kennaway
%A M.R. Sleep
%T DACTL: A computational model and compiler target
   	  language based on graph reduction
%J ICL Technical Journal
%V 5
%N 3
%D 1987

DACTL has similar aims to FLIC but is not restricted to functional
programming.

We are using FLIC at UCL for various stages of the compiler design
process; it is available as an optional output from the Chalmers Lazy
ML compiler and the Glasgow Haskell compiler (because the latter is
built on top of the former!).

Dave.

nick@cs.edinburgh.ac.uk (Nick Rothwell) (12/06/90)

In article <1316@ucl-cs.uucp>, D.Parrott@cs.ucl.ac.uk writes:
> 
> FLIC is probably the most universal of the intermediate codes because
> it was designed to be.
> ...

> %T DACTL: A computational model and compiler target
>    	  language based on graph reduction
> %J ICL Technical Journal

I would guess (although I don't know for sure) that these schemes are
biased towards lazy, pure functional languages - the mention of
graph reduction above implying this. Compilers for SML (which is neither
lazy nor purely functional) have to use different intermediate languages.
SML/New Jersey and the Edinburgh Kit Compiler use untyped lambda calculus
as an intermediate language - all the language features in ML (including
pattern matching and the modules system) can be compiled down to a pretty
simple intermediate code. In fact, if I dig around, I can probably find the
signature for the Kit Compiler's intermediate form...

    datatype 'a option = NONE | SOME of 'a

    datatype LambdaExp =
        VAR      of lvar		(* lambda variables. *)
      | INTEGER  of int			(* constants... *)
      | STRING   of string
      | REAL     of real
      | FN       of lvar * LambdaExp	(* function-terms. *)
      | FIX      of lvar list * LambdaExp list * LambdaExp
					(* mutual recursive fns. *)
      | APP      of LambdaExp * LambdaExp	(* function application. *)
      | PRIM_APP of prim * LambdaExp	(* primitive function application. *)
      | VECTOR   of LambdaExp list	(* records/tuples. *)
      | SELECT   of int * lvar		(* con/record indexing. *)
      | SWITCH_I of int Switch		(* switch on integers. *)
      | SWITCH_S of string Switch	(* ...strings *)
      | SWITCH_R of real Switch		(* ...reals *)
      | RAISE    of LambdaExp		(* raise exception *)
      | HANDLE   of LambdaExp * LambdaExp	(* exception handling. *)
      | VOID				(* nil, () etc. *)

			(* HANDLE(e1, e2) executes e1; if an exception
			   packet results, e2 is applied to the packet.
			   therefore, e2 must be a FN. *)

    and 'a Switch = SWITCH of {arg: lvar,
			       selections: ('a, LambdaExp) map,
			       wildcard: LambdaExp option
				(* mandatory for REAL or STRING switches. *)
			      }

There's no type information here at all; the compiler works on an
abstract syntax tree which it assumes has already been typechecked.
In fact, the typechecker has to provide a few hooks for datatype
switches and the like.
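For readers unfamiliar with this style of intermediate form, here is a
toy sketch of one -- Haskell, with invented names, mirroring only the
VAR/FN/APP/INTEGER fragment of the datatype above (this is not the
Kit Compiler's actual code) -- together with a naive call-by-value
evaluator over it:

```haskell
import qualified Data.Map as M

type LVar = String

-- A tiny untyped lambda intermediate form, in the spirit of the
-- signature above (hypothetical fragment only).
data LambdaExp
  = Var LVar                   -- lambda variables
  | IntLit Int                 -- integer constants
  | Fn LVar LambdaExp          -- function-terms
  | App LambdaExp LambdaExp    -- function application
  deriving Show

data Value = VInt Int | VClosure LVar LambdaExp Env
type Env = M.Map LVar Value

-- Call-by-value evaluation (eager, as in SML): the argument is
-- evaluated before being bound in the closure's environment.
eval :: Env -> LambdaExp -> Value
eval env (Var v)    = env M.! v
eval _   (IntLit n) = VInt n
eval env (Fn v b)   = VClosure v b env
eval env (App f a)  =
  case eval env f of
    VClosure v b cenv -> eval (M.insert v (eval env a) cenv) b
    _                 -> error "application of a non-function"

-- (\x. x) 3
main :: IO ()
main = case eval M.empty (App (Fn "x" (Var "x")) (IntLit 3)) of
         VInt n -> print n
         _      -> error "unexpected"
```

Note that the real intermediate form above carries much more (FIX,
switches, exceptions); the point is only how little structure an
untyped lambda code needs.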

SML/New Jersey has a second intermediate form, essentially lambda
calculus without nested applications, which can be used to build
continuation expressions (I'm a little fuzzy on the details).  There
are a lot of intermediate passes to do closure analysis, register
allocation, and so on.
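The "no nested applications" restriction is the essence of
continuation-passing style.  A minimal illustration (hypothetical
Haskell, not SML/NJ's actual representation): every intermediate
result gets a name and is handed on to a continuation.

```haskell
-- Direct style: the nested calls (square a), (square b) appear as
-- arguments of (+).
square :: Int -> Int
square x = x * x

sumOfSquares :: Int -> Int -> Int
sumOfSquares a b = square a + square b

-- Continuation-passing style: no nested applications; each
-- intermediate result is bound and passed to a continuation k.
squareK :: Int -> (Int -> r) -> r
squareK x k = k (x * x)

sumOfSquaresK :: Int -> Int -> (Int -> r) -> r
sumOfSquaresK a b k =
  squareK a (\sa ->
    squareK b (\sb ->
      k (sa + sb)))

main :: IO ()
main = sumOfSquaresK 3 4 print
```

In this form, evaluation order and all intermediate values are
explicit, which is what makes later passes (closure analysis,
register allocation) tractable.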

-- 
Nick Rothwell,  Laboratory for Foundations of Computer Science, Edinburgh.
                nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcsun!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
 "You ain't seen nothing yet. I can take this floor out too, no trouble."

kh@cs.glasgow.ac.uk (Kevin Hammond) (12/07/90)

In article <1316@ucl-cs.uucp> D.Parrott@cs.ucl.ac.uk writes:

>DACTL has similar aims to FLIC but is not restricted to functional
>programming.

That is a little confusing, to say the least.  Dactl is specifically
aimed at parallel implementations of declarative languages.  It is a
language based on the theory of graph-rewriting (graph-rewriting is to
term-rewriting as graph reduction is to string reduction), but with an
explicit encoding of strategy.  It is non-deterministic, and will
support (almost) all features of languages such as Standard ML
(including assignments and imperative I/O).  Annotations may be used
to specify such notions as locality etc.

The reference you give is also rather out of date (I think that might even
be to Dactl0 rather than Dactl1 -- the latter is an almost entirely different
language).  Try:

%A J.R.W. Glauert
%A K. Hammond
%A J.R. Kennaway
%A G.A. Papadopoulos
%A M.R. Sleep
%T DACTL: Some Introductory Papers
%R Internal Report SYS-C88-08, University of East Anglia
%D 1987
%Z Language description, implementation overviews.

%A K. Hammond
%T Implementing Functional Languages for parallel machines
%R PhD thesis, University of East Anglia
%D 1988
%Z Implementing SML using Dactl

%A K. Hammond
%T Parallel SML: a functional language and its implementation in Dactl
%I Pitman Press
%D December, 1990
%Z Published version of above -- much more detail on Dactl, with a
   simple introduction and several (largish) examples, including a
   parallel type-inference engine.

%A G.A. Papadopoulos
%R PhD thesis, University of East Anglia
%D 1989
%Z PARLOG/Concurrent Prolog in Dactl, direct implementation of full GHC
   I don't have the title to hand.

Kevin
-- 
This Signature Intentionally Left Blank.	E-mail: kh@cs.glasgow.ac.uk

kh@cs.glasgow.ac.uk (Kevin Hammond) (12/10/90)

In article <3054@skye.cs.ed.ac.uk> nick@lfcs.ed.ac.uk writes:
>In article <1316@ucl-cs.uucp>, D.Parrott@cs.ucl.ac.uk writes:
>
>I would guess (although I don't know for sure) that these schemes are
>biased towards lazy, pure functional languages - the mention of
>graph reduction above implying this. 

True for FLIC, but wrong for Dactl.  Dactl is a *very* general 
graph-rewriting language which supports, but does not require, lazy
evaluation.  See my thesis/book for examples of compiling eager
(and lazy and semi-lazy) SML into Dactl.  We have also used Dactl for
compiling Hope+, Common Lisp, and various logic languages.

Kevin
-- 
This Signature Intentionally Left Blank.	E-mail: kh@cs.glasgow.ac.uk

jk@cs.man.ac.uk (John Kewley ICL) (12/15/90)

In article <7185@vanuata.cs.glasgow.ac.uk> kh@cs.glasgow.ac.uk (Kevin Hammond) writes:
>In article <3054@skye.cs.ed.ac.uk> nick@lfcs.ed.ac.uk writes:
>
>True for FLIC, but wrong for Dactl.  Dactl is a *very* general 
>graph-rewriting language which supports, but does not require, lazy
>evaluation.  See my thesis/book for examples of compiling eager
>(and lazy and semi-lazy) SML into Dactl.  We have also used Dactl for
>compiling Hope+, Common Lisp, and various logic languages.

In fact, a Hope+ -> DACTL compiler has been written which could produce
either lazy or (eager?) DACTL (Is a language with lazy constructors and
strict functions eager, strict, semi-lazy, semi-strict?).
--
        J.K.
 
John M. Kewley, ICL, Wenlock Way, West Gorton, Manchester. M12 5DR
Tel:   (+44) 61 223 1301 X2138  Email: jk@cs.man.ac.uk / jk@nw.stl.stc.co.uk

fs@rex.cs.tulane.edu (Frank Silbermann) (12/17/90)

In article <1975@m1.cs.man.ac.uk> jk@cs.man.ac.uk (John Kewley ICL) writes:
>
>	In fact, a Hope+ -> DACTL compiler has been written
>	which could produce either lazy or (eager?)
>	DACTL (Is a language with lazy constructors
>	and strict functions eager, strict, semi-lazy, semi-strict?).

I understand what it means to say that a _function_ is strict:
it maps the bottom element of the input domain
to the bottom element of the output domain.
I also understand that this property can justify
evaluation orders which otherwise might not always work.

But how can one speak of strict or lazy constructors?
Consider for example, the ordered pair constructor,
which uses elements X1 and X2 of domains D1 and D2,
to create the element <X1,X2> of domain D1xD2.
The standard selector functions will choose
the appropriate piece of <X1,X2> regardless
of whether the other side happens to be the bottom.
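For concreteness, in a lazy language (Haskell here, with bottom
approximated by a runtime error rather than genuine nontermination,
purely so the example is runnable) the standard selector behaves
exactly this way:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Stand-in for the bottom element of the Int domain.
bottom :: Int
bottom = error "bottom"

main :: IO ()
main = do
  -- fst never touches the second component, so <7,bot> projects to 7.
  print (fst (7 :: Int, bottom))
  -- Selecting the bottom component itself, of course, yields bottom.
  r <- try (evaluate (snd (7 :: Int, bottom)))
         :: IO (Either SomeException Int)
  putStrLn (case r of
              Left _  -> "selected component was bottom"
              Right _ -> "unexpectedly defined")
```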

Of course, the language designer may choose to deny the programmer
direct access to the standard selector functions,
and instead provide selector functions which,
before selecting an element, verify that both sides
are strictly above bottom.  In any case,
the elements <X1,bot>, <bot,X2> and <bot,bot> are
distinct elements of D1xD2, regardless of whether
any particular language lets the programmer make use of this fact.

So, when we speak of strict constructors, are we not _really_
describing a property of the _selector_ functions, and not the constructor?
Or, is there another version of domain theory I am not aware of?

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA

rama@tone.rice.edu (Ramarao Kanneganti) (12/18/90)

In article <5455@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>
>But how can one speak of strict or lazy constructors?
>Consider for example, the ordered pair constructor,
>which uses elements X1 and X2 of domains D1 and D2,
>to create the element <X1,X2> of domain D1xD2.
>The standard selector functions will choose
>the appropriate piece of <X1,X2> regardless
>of whether the other side happens to be the bottom.

Well, the constructor you described above is not strict.  But
consider the following constructor, StrictPair: it takes X1 from D1
and X2 from D2 and creates an element of D1 * D2.  If either X1 or X2
is bottom, the pair constructed is bottom.

Now the complete definition of D1 * D2:

Let U be the universal domain, and let D1 and D2 be arbitrary
computable subspaces of U, characterized by the projection maps R1
and R2.  Using the cartesian map "pair" : U -> (U -> U x U) we can
build several simple composite spaces; one of them is the "coalesced
product" (because the product operation is strict, it is customary
to call it the "strict" product):

D1 * D2 = { <x,y> | x in D1 and y in D2 and x != bot and y != bot }
          U { bottom }

The corresponding functions are:

StrictPair : D1 -> (D2 -> D1 * D2) =
	\x. \y. if defined(x) and defined(y) then pair x y else bottom
fst z = left z
snd z = right z
R_strict : U -> (D1 * D2) =
	\x. if defined(x) then StrictPair (R1 (fst x)) (R2 (snd x))
	    else bottom
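A runnable sketch of StrictPair (Haskell, with strict constructor
fields standing in for the coalesced product, and a runtime error
standing in for bottom; names are illustrative):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Strict fields: the constructor forces both components, so a pair
-- with a bottom component is itself bottom -- the coalesced product.
data StrictPair a b = SP !a !b

sFst :: StrictPair a b -> a
sFst (SP x _) = x

main :: IO ()
main = do
  ok <- try (evaluate (sFst (SP (1 :: Int) (2 :: Int))))
          :: IO (Either SomeException Int)
  print ok
  -- Constructing SP 1 bottom forces the bottom, so selection fails:
  bad <- try (evaluate (sFst (SP (1 :: Int) (error "bottom" :: Int))))
          :: IO (Either SomeException Int)
  putStrLn (case bad of
              Left _  -> "SP 1 bottom is bottom"
              Right _ -> "unexpectedly defined")
```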

>Of course, the language designer may choose to deny the programmer
>direct access to the standard selector functions,
>and instead provide selector functions which,
>before selecting an element, verify that both sides
>are strictly above bottom.  In any case,
>the elements <X1,bot>, <bot,X2> and <bot,bot> are
>distinct elements of D1xD2, regardless of whether
>any particular language lets the programmer make use of this fact.
>
>So, when we speak of strict constructors, are we not _really_
>describing a property of the _selector_ functions, and not the constructor?
>Or, is there another version of domain theory I am not aware of?


In the above version of the coalesced product we formed there are no
elements such as <X1, bot>.  For an excellent discussion of all the
different semantics of lazy data constructors you can read the
following paper:

[Lazy] Cartwright, R. and J. Donahue.  The semantics of lazy (and
industrious) evaluation.  In Proc. 1982 ACM Symposium on Lisp and
Functional Programming, 1982, 253--264.  Also available as Xerox PARC
Technical Report CSL-83-9, 1984.


Regards
Rama Rao (rama@rice.edu)

sanjiva@oravax.UUCP (Sanjiva Prasad) (12/18/90)

In article <5455@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>I understand what it means to say that a _function_ is strict:
>it maps the bottom element of the input domain
>to the bottom element of the output domain.
>I also understand that this property can justify
>evaluation orders which otherwise might not always work.
>
>But how can one speak of strict or lazy constructors?
>Consider for example, the ordered pair constructor,
>which uses elements X1 and X2 of domains D1 and D2,
>to create the element <X1,X2> of domain D1xD2.
>The standard selector functions will choose
>the appropriate piece of <X1,X2> regardless
>of whether the other side happens to be the bottom.
>
>Of course, the language designer may choose to deny the programmer
>direct access to the standard selector functions,
>and instead provide selector functions which,
>before selecting an element, verify that both sides
>are strictly above bottom.  In any case,
>the elements <X1,bot>, <bot,X2> and <bot,bot> are
>distinct elements of D1xD2, regardless of whether
>any particular language lets the programmer make use of this fact.
>
>So, when we speak of strict constructors, are we not _really_
>describing a property of the _selector_ functions, and not the constructor?

Yes, somewhat.  A "constructor" is an *uninterpreted* function, which
can be thought of as forming a "record" of appropriate type and arity
that allows the recovery of components by *implicit* selector
functions.  The notion of "strict constructor" is a slight abuse of
terminology, since the so-called constructor is not wholly
uninterpreted.  Actually, what we are doing is pretending to work with
the product domains (e.g. D1xD2) while actually working with variants
of them (possibly embeddable in the product domain), in which elements
such as <X,bottom> are identified with <bottom,bottom>, which might be
bottom of the product domain (i.e. of D1xD2).  Therefore the selector
(deconstructor) functions do not let us recover the components, but
only approximations of them.  The so-called constructor symbol is
weakly interpreted (under the approximation ordering), but otherwise
behaves as a true constructor would (for all bottom-free elements).

So for example:

true pairing, written (_,_) : D1 x D2 -> D1xD2; pi_1 and pi_2 are
    true deconstructors.
strict pairing, written <_,_> : D1 x D2 -> strict_pseudo-product(D1,D2)
    [resembles D1xD2]
first  : strict_pseudo-product(D1,D2) -> D1
second : strict_pseudo-product(D1,D2) -> D2

first <X,bottom> = bottom		whereas pi_1 (x,y) = x
first <X,Y> = X  (if Y =/= bottom)
second < bottom, Y> = bottom		whereas pi_2 (x, y) = y
second <X,Y> = Y (if X =/= bottom)

Note that given any a,b in D1, D2: 
      first <a,b> <=_(in D1) a,      second <a,b> <=_(in D2) b.
strict_pseudo-product(D1,D2) is embeddable in D1xD2 but not isomorphic to it.
Other quasi-constructors for pairing are possible -- e.g. first-strict,
second-strict, depending on the kind of projection [an idempotent 
approximation of identity] desired.

>Or, is there another version of domain theory I am not aware of?
No, I don't think so.

Regards,
Sanjiva Prasad
ORA, Ithaca NY 14850		sanjiva@oracorp.com

PS: Frank, did you see my posting re: recursion in languages based on the 
typed lambda calculus?

jl@cs.glasgow.ac.uk (John Launchbury) (12/18/90)

fs@rex.cs.tulane.edu (Frank Silbermann) writes:

>But how can one speak of strict or lazy constructors?
>Consider for example, the ordered pair constructor,
>which uses elements X1 and X2 of domains D1 and D2,
>to create the element <X1,X2> of domain D1xD2.
>The standard selector functions will choose
>the appropriate piece of <X1,X2> regardless
>of whether the other side happens to be the bottom.

>Of course, the language designer may choose to deny the programmer
>direct access to the standard selector functions,
>and instead provide selector functions which,
>before selecting an element, verify that both sides
>are strictly above bottom.  In any case,
>the elements <X1,bot>, <bot,X2> and <bot,bot> are
>distinct elements of D1xD2, regardless of whether
>any particular language lets the programmer make use of this fact.

>So, when we speak of strict constructors, are we not _really_
>describing a property of the _selector_ functions, and not the constructor?
>Or, is there another version of domain theory I am not aware of?

No, no other version of domain theory. It's just the method of
modelling the implementation by domain theory that you don't seem
to be fully aware of. First of all, sequential machine implementations
are inherently strict. In order to achieve laziness, implementations
make use of closures - pieces of code/graph which represent a
potential computation. Closures of values of a type A may be modelled
by values of the type (Lift A) - all the elements of A plus a new bottom.
The implementation will be strict in this new bottom, but not in
the bottom of A (not yet evaluated).

A constructor (a sum injection) may be strict or lazy: the latter is
achieved by making the constructor point to a closure, the former
by making it point to the value (if the process of producing the value
does not terminate, then neither will the construction of the sum).
Domain theoretically, the machine uses strict sum (smash sum) in both
cases, but in the lazy case it is the smash sum of lifted values. By
the standard isomorphism this is the same as separated sum.

Product formation is similar, but there are distinctions. Consider
the construction of a closure for a pair. What actually happens
(in the G-machine, TIM etc.) is that a pair cell is constructed
with a closure to each value. The construction of the pair is
strict in that if the construction of either closure failed to
terminate successfully (e.g. out of heap) then so would the
construction of the pair. Thus pair construction is strict product,
but it is the product of closures of values. This is isomorphic to
a closure of a non-strict product. So long as it is impossible to
get a recursive value using just products, e.g.
	x = (1,x)
then this is a correct implementation (note that the above definition
of x is ruled out by the common HM type system.)

Finally, it is worth noting that exactly the same trick is used to
obtain lazy functions. This uses the isomorphism
	A -> B  ===  (Lift A) => B
where => is the space of strict functions.
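Concretely (a Haskell sketch, names invented): an ordinary function
receives its argument as a closure and may never force it, while
`seq` recovers a strict function, mirroring the
A -> B === (Lift A) => B isomorphism above.

```haskell
-- A lazy function: its second argument arrives as a closure (a
-- lifted value) and is never inspected.
lazyConst :: Int -> Int -> Int
lazyConst x _ = x

-- Turn any function into a strict one by forcing the lifted
-- argument before applying.
strictify :: (a -> b) -> a -> b
strictify f x = x `seq` f x

main :: IO ()
main = print (lazyConst 1 (last (repeat 2)))
  -- (last (repeat 2)) never terminates if forced, yet the call
  -- returns: the closure is never entered.  By contrast,
  -- strictify (lazyConst 1) (last (repeat 2)) would diverge.
```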

fs@caesar.cs.tulane.edu (Frank Silbermann) (12/19/90)

><5455@rex.cs.tulane.edu> fs@rex.cs.tulane.edu (Frank Silbermann) writes:
>>
>>So, when we speak of strict constructors, are we not _really_
>>describing a property of the _selector_ functions, and not the constructor?

<1856@oravax.UUCP> sanjiva@oravax.odyssey.UUCP (Sanjiva Prasad) writes:
>
>	Yes, somewhat. A "constructor" is an *uninterpreted* function, ...
>	The notion of "strict constructor" is a slight abuse of terminology,
>	since the so-called constructor is not wholly uninterpreted.
>	Actually what we are doing is pretend we are
>	working with the product domains (e.g. D1xD2),
>	but are actually working with variants of them
>	(possibly embeddable in the product domain),
>	where elements such as <X,bottom> are identified with <bottom,bottom>,
>	which might be bottom of the product domain (i.e. of D1xD2). 
>	...
>	The so-called constructor symbol is weakly interpreted
>	(under approximation ordering), but otherwise behaves as
>	a true constructor would (for all non-bottom/ bottomless elements). 

It seems like a lot of unnecessary complication.
Instead of associating <bot,3> with <bot,bot>,
it seems simpler to keep these elements distinct,
but simply not to allow programs to denote elements such as <bot,3>.

In other words, one may let the syntactic `cons'
denote a strict _function_, f(x,y), which examines
both x and y, and then either returns <x,y>
(if neither x nor y is bottom), or else returns <bot,bot>.
The function `f' is strict, not <_,_>.

Because we only access the ordered pair constructor via a strict function,
elements such as <7,bot> would exist in the domain,
but could not be denoted via the language.

To be sure, this means that our domain is bigger
than strictly (no pun intended) necessary.
However, this is true in any case, since we usually
do not permit definition of every possible
continuous function, either.  (E.g., in many languages
it is impossible to define the parallel boolean `OR' ---
which would return TRUE if either argument is TRUE,
even if the other is bottom.)
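A sketch of this proposal in Haskell (names invented): the underlying
pair stays lazy, but the only constructor the programmer sees is a
strict *function* that verifies both arguments before building the
pair.

```haskell
-- The syntactic `cons' denotes a strict function f(x,y): force both
-- components, then build the ordinary (lazy) pair.  Elements like
-- <7,bot> still exist in the domain but cannot be denoted through
-- this interface.
strictCons :: a -> b -> (a, b)
strictCons x y = x `seq` (y `seq` (x, y))

main :: IO ()
main = print (strictCons (7 :: Int) (8 :: Int))
  -- strictCons 7 (error "bottom") would be bottom, even though
  -- fst (7, error "bottom") on the raw pair would still be 7.
```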

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA

rama@titan.rice.edu (Ramarao Kanneganti) (12/19/90)

In article <5477@rex.cs.tulane.edu> fs@caesar.cs.tulane.edu (Frank Silbermann) writes:
>To be sure, this means that our domain is bigger
>than strictly (no pun intended) necessary.
>However, this is true in any case, since we usually
>do not permit definition of every possible
>continuous function, either.  (E.g., in many languages
>it is impossible to define the parallel boolean `OR',
>--- which would return TRUE if either argument is TRUE,
>even if the other is bottom).


Usually it is advantageous to have the domain completely match the
language we are defining.  In the case of coalesced products, the
domain can be embedded as a projection of the product domain.  But the
sequential functions are not embeddable as a continuous retraction of
the continuous function space.  Still, people use the continuous model
to denote the meanings, as there is no other model that is fully
abstract with respect to the sequential function space.

Regards,
Rama Rao ( rama@rice.edu )

fs@caesar.cs.tulane.edu (Frank Silbermann) (12/20/90)

In article <1990Dec18.174225.27885@rice.edu> rama@titan.rice.edu (Ramarao Kanneganti) writes:

>	Usually, it advantageous to have the domain
>	completely matching the language we are defining.
(i.e. with no elements undenotable via the syntax)

>	In the case of coalesced products,		(i.e. "strict" cons)
>	it can be embedded as a projection of the product domain.
>	But, the sequential functions are not embeddable
>	as a continous retraction of the continuous function space.
>	Still, people use the continuous model to denote the meanings,
>	as there is no other model that is fully abstract
>	with respect to sequential function space.

Aside from a desire to avoid unnecessary complexity,
what are some of the advantages of using a domain
that contains only those elements which can be denoted from the language?
(I ask this because sometimes adding extra elements
can make some aspects simpler -- just as judicious use
of "don't cares" can simplify manufacture of a logic gate.)

Will the advantages still hold, even when other constructors
(i.e. function abstraction) _do_ result in extra domain elements?

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA

rama@titan.rice.edu (Ramarao Kanneganti) (12/21/90)

In article <5487@rex.cs.tulane.edu> fs@caesar.cs.tulane.edu (Frank Silbermann) writes:
>
>Aside from a desire to avoid unnecessary complexity,
>what are some of the advantages of using a domain
>that contains only those elements which can be denoted from the language?
>(I ask this because sometimes adding extra elements
>can make some aspects simpler -- just as judicious use
>of "don't cares" can simplify manufacture of a logic gate.)

The disadvantage I am aware of is this: equalities that hold in the
operational world do not hold in the denotational model.  That is, if
you are using the denotational model to prove equalities between two
terms, then even if their denotations are unequal, you may not be able
to conclude that the terms are operationally unequal.

Ex: In the standard continuous model for functions in a sequential
language:

F_i = \f. if (f bot true)
          then if (f true bot)
               then if (not (f false false))
                    then i else bot
               else bot
          else bot

Operationally F_1 and F_2 are equal: they always diverge.  However,
the denotations of F_1 and F_2 are not equal.  One of the "extra"
elements in the domain, namely parallel-or, makes F_i converge to i.

A similar example can be manufactured in the case of "strict" cons.
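As an aside: parallel-or is not definable as a sequential function,
but it can be simulated operationally by racing two threads.  A rough
Haskell sketch (the names are invented, and this `por` lives in IO,
so it is only an analogy to the pure parallel-or of the
domain-theoretic argument):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (evaluate)
import Control.Monad (void, when)

-- Race the two arguments: answer True as soon as either yields True,
-- answer False only once both have yielded False.
por :: Bool -> Bool -> IO Bool
por a b = do
  result <- newEmptyMVar
  falses <- newMVar (0 :: Int)
  let worker x = void . forkIO $ do
        v <- evaluate x
        if v
          then void (tryPutMVar result True)
          else modifyMVar_ falses $ \n -> do
                 when (n + 1 == 2) (void (tryPutMVar result False))
                 pure (n + 1)
  worker a
  worker b
  takeMVar result

-- A genuinely nonterminating computation: the bottom element.
diverge :: Bool
diverge = last (repeat True)

main :: IO ()
main = por diverge True >>= print
  -- A sequential (diverge || True) would loop instead of answering.
```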

>
>Will the advantages still hold, even when other constructors
>(i.e function abstraction) _do_ result in extra domain elements?
>
>	Frank Silbermann	fs@cs.tulane.edu
>	Tulane University	New Orleans, Louisianna  USA

By this, I assume that through some other constructor you are able to
create the "extra" domain elements that have been left out of the
domain.  In this case, unless you use types to identify the exact
subdomain you are working with, I do not see the same disadvantage.
E.g., introducing parallel-or as a member of the function type: if you
do not prohibit parallel-or from being used as a function, the above
example would not create any problem.

Regards,
Rama

fs@caesar.cs.tulane.edu (Frank Silbermann) (12/21/90)

>>Aside from a desire to avoid unnecessary complexity,
>>what are some of the advantages of using a domain
>>that contains only those elements which can be denoted from the language?
>>(I ask this because sometimes adding extra elements
>>can make some aspects simpler -- just as judicious use
>>of "don't cares" can simplify manufacture of a logic gate.)

In article <1990Dec20.190014.20982@rice.edu> rama@titan.rice.edu
(Ramarao Kanneganti) writes:
>
>	The disadvantage I am aware of is: The equalities that hold in the
>	operational world do not hold in the denotational model. That is, if
>	you are using the denotational model to prove equalities between two
>	terms, even if they are unequal, you may not be able to conclude that
>	they are operationally unequal.
>
>	Ex: In the standard continuous model for functions in a sequential
>	language: 
> 
>	F_i = ...
>			if (f true bot)
>				if (not (f false false))
>				then i else bot
>
>	Operationally F_1 and F_2 are equal. They always diverge. However,
>	denotations of F_1 and F_2 are not equal. One of the "extra" elements
>	in the domain, i.e. parallel-or make F_i converge to i.

It seems more important, for the sake of referential transparency,
that denotational equality should imply operational equality.
(Is this what is meant by "adequacy" of the model?)

However, I don't foresee anybody complaining,
"I compared the execution of two programs on all possible inputs,
and though my denotational semantics says the programs are not the same,
after watching both for all eternity I noticed that
both results are equally undefined on everything I tried!"

The notion of operational equality seems incomplete to me,
because you _cannot_ compare their behavior on all possible inputs
if some of those inputs cannot be expressed in the language.

>	A similar example can be manufactured in case of "strict" cons also.

>>Will the advantages still hold, even when other constructors
>>(i.e function abstraction) _do_ result in extra domain elements?

>	By this, I assume that through some other constructor,
>	you are able to create the "extra" domain elements
>	that have been left out in the domain. In this case,
>	unless you use types to identify the exact
>	subdomain you are working with, I do not see the same disadvantage.
>	Eg: Introducing parallel-or as a member of function type.
>	If you do not prohibit parallel-or being used as a function,
>	the above example would not create any problem.

Uh, I didn't understand that last part.
Could you please explain it again, more concretely?

My question was,
"Since our language is not going to be fully abstract anyway
(due to lack of provision for defining parallel functions),
what _further_ loss results from replacement of a strict-cons
by a lazy cons accessed from within a strict function
(so that arguments are verified as being above bottom
before application of the lazy cons)?"

Frank Silbermann	fs@cs.tulane.edu
Tulane University	New Orleans, Louisiana  USA

rama@tone.rice.edu (Ramarao Kanneganti) (12/21/90)

In article <5504@rex.cs.tulane.edu> fs@caesar.cs.tulane.edu (Frank Silbermann) writes:
>
>It seems more important, for the sake of referential transparency,
>that denotational equality should imply operational equality.
>(Is this what is meant by "adequacy" of the model?)

Yes, this is the "adequacy" of the model which has to hold if the
denotation should make any sense at all.

>
>However, I don't forsee anybody complaining,
>"I compared the execution of two programs on all possible inputs,
>and though my denotational semantics says the programs are not the same,
>after watching both for all eternity I noticed that
>both results are equally undefined on everything I tried!"
>
>The notion of operational equality seems incomplete to me,
>because you _cannot_ compare their behavior on all possible inputs
>if some of those inputs cannot be expressed in the language.

Operational equality is not defined as "equal on all inputs".  If two
parts of a program are observationally indistinguishable in all
contexts of the language, then they are operationally equivalent.  To
prove this you don't need to run the programs; you can prove things
based on reduction rules etc.  Often this is painful, as you have to
use induction on the length of the reduction or some equally difficult
proof method.  In fact, when I said that F_1 and F_2 are operationally
equal, I can prove it using the reduction semantics (i.e. they are
equal in all contexts).  I find it somewhat difficult to follow; that
is one of the reasons why I prefer proofs using denotational
semantics.

What is disturbing is when I prove that two functions are not equal,
some theory based on reduction semantics tells me that they are equal.
Then the full power of denotational semantics is lost.
>
>My question was,
>"Since our language is not going to be fully abstract anyway
>(due to lack of provision for defining parallel functions),
>what _further_ loss results from replacement of a strict-cons
>by a lazy cons accessed from within a strict function
>(so that arguments are verified as being above bottom
>before application of the lazy cons)?"

I misunderstood your question last time. Sorry about that.

It seems to me that more and more inequalities that hold in the model
won't be true in the language (i.e. there is no further "qualitative
loss"). For me it is a sure sign that either I am modeling the
language wrong, or the language needs to be fixed. I believe that the
continuous model is not suitable for a sequential language.

-- rama

fs@rex.cs.tulane.edu (Frank Silbermann) (12/22/90)

In article <1990Dec20.215506.25243@rice.edu> rama@tone.rice.edu (Ramarao Kanneganti) writes:
>
>	If two parts of the programs are observationally
>	not different in all contexts of the language,
>	then they are operationally equivalent.
>	To prove this you don't need to run the programs.
>	You can prove things based on reduction rules etc.
>	Often this is painful, as you have to use induction
>	on length of the reduction or some such equally difficult proof method.
>	I find it somewhat difficult to follow; that is one of the reasons
>	why I prefer proofs using denotational semantics.

Indeed.  I thought the whole purpose of declarative programming
is that we can reason about programs in the _semantic_ realm.
We want to use mathematics to prove things about our programs,
not vice-versa.

>	What is disturbing is when I prove that two functions
>	are not equal, some theory based on reduction semantics
>	tells me that they are equal.  Then the full power
>	of denotational semantics is lost.

This just says that operational "equivalence" is weaker than true equality
(yet another reason to prove things via the model, not the syntax).

>	It seems to me that more and more inequalities that hold
>	in the model won't be true in the language.

You mean, won't be _observable_ in the language.
The solution is to stop identifying operational equivalence
with equality.

>	For me it is a sure sign that either I am modeling the
>	language wrong, or the language needs to be fixed.
>	I believe that the continuous model is not suitable
>	for a sequential language.  -- rama

I don't think it's as bad as all that.
Even in arithmetic, we model our language via
the _real_ numbers, even though we probably cannot denote
every one of them in our notation.
Why do we put up with this in mathematics
(having models which are richer than the language)?
Probably because sometimes a stronger result
(e.g. a theorem about the model)
is easier to obtain than a weaker result
(e.g. a theorem about the language).

In the functional programming field,
I would like to see fewer papers about lambda calculi,
and more papers about domain models for functional languages,
especially the polymorphically-typed kind.
(I'd write one, myself, if I could figure out how :-)	).

	Frank Silbermann	fs@cs.tulane.edu
	Tulane University	New Orleans, Louisiana  USA