[comp.lang.misc] H.O.Functions + Polymorphism

pase@ogcvax.UUCP (Douglas M. Pase) (07/08/87)

In article <megaron.1796> gudeman@arizona.edu (David Gudeman) writes:
>
>Any language with functions can have higher order functions, and any
>language with the concept of `arguments' can have lazy evaluation of
>them.  So how can these features be called advantages of functional
>programming?

Well, imperative languages do not support these features, and higher order
functions, at least, are central to functional programming.  Because
imperative languages are modeled somewhat after the machines they run on,
having storage locations, addresses and state, such concepts are really
incompatible with them.  Higher order functions are not really blocks of
code which have a starting address.  They are more complex than that.  I
will try to illustrate.

'C' is one of the more flexible imperative languages.  It allows one to
pass the address of functions to other functions.  In some sense, this is
a higher order function.  However, it does not allow the full range of
flexibility that higher order functions provide.  For example, one
is not able to directly bind arguments to a function and thereby create
a new function.  Take for example a function that I'll call 'plus'.  (I
leave it to the interested reader to figure out just what this function
does.)  One common feature of languages which directly support higher order
functions is that 'plus' can be partially bound to form a new function --

	let p1 = plus 1;

would define a new function, 'p1', which uses the function 'plus' and gives
it an argument.  If we assume the type of 'plus' is

	plus : int -> (int -> int)

then the type of 'p1' would be

	p1 : int -> int

In other words, "p1 5" is an integer expression, and "p1" is an integer
function.  "plus" could be viewed as a function which takes two integer
arguments and returns an integer result, or as a function which accepts
an integer argument and returns an integer function.  Both views are
compatible.
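
A rough OCaml rendering of the same definitions (a sketch in OCaml syntax,
not the notation above) makes both views visible in the types:

	let plus : int -> int -> int = fun x y -> x + y   (* curried addition *)

	let p1 : int -> int = plus 1   (* partial application binds one argument *)

	let six = p1 5                 (* "p1 5" is an integer expression *)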

This view of functions is thoroughly incompatible with that of any imperative
language.

Now, combine this idea with polymorphic types.  A polymorphic type is a
data type which may take on values of many different types -- that is, it
may sometimes be a list, sometimes an integer, a pair, or a list of pairs,
and so on.  It may not be more than one type at a time.  The easiest way to
explain this is with an example.

	letrec listcount = \ list .	/* the '\' says 'list' is an arg */
		if list = [] then	/* the list is empty */
			0		/* the count is zero */
		else			/* count the remains & add 1 */
			1 + listcount (tail list)
	;

This is a simple function which counts the elements of a list.  Its type is

	listcount : list(*a) -> int

Notice that the list argument is polymorphic -- that is, it can take on any
type.  But lists cannot have elements that are of different types.  A list
may have integer elements, but if it does it cannot have elements of other
types.  Thus I may say

	listcount [1;2;3]
or
	listcount ['a';'b';'c']

even though the arguments have different types (list of integers vs. list
of characters).  Notice that

	listcount [1;'b';(3,4)]

is not legal because the elements of the list do not all have the same type.
This is all to say that polymorphic types can still be strongly typed
(that is, no type errors will go undetected).  I leave it to your imagination
to see how these features might help in the software reusability problem.
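
For the curious, here is 'listcount' again as a quick OCaml sketch (my
transcription; OCaml writes its type as 'a list -> int rather than
list(*a) -> int):

	(* Count the elements of a list, whatever the element type. *)
	let rec listcount lst =
	  match lst with
	  | [] -> 0
	  | _ :: rest -> 1 + listcount rest

	let n1 = listcount [1; 2; 3]          (* 3 *)
	let n2 = listcount ['a'; 'b'; 'c']    (* 3 *)
	(* listcount [1; 'b'; (3, 4)] is rejected by the type checker *)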

I do not yet see the advantages to lazy evaluation except in a few cases
which are academically interesting but of little practical use (all of which
involve an exact representation of an infinite sequence, such as the digits of
an irrational number).  However, lazy evaluation is not supportable in
imperative languages as it is an 'evaluate on demand' strategy for function
parameter evaluation.  All imperative languages evaluate the arguments to
functions before the function is invoked.  I have not studied this particular
feature (lazy evaluation) in any great depth so my understanding may well be
partial.
--
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@Oregon-Grad.csnet

jef@unisoft.uucp (Jef Poskanzer) (07/08/87)

In the referenced article, pase@ogcvax.UUCP (Douglas M. Pase) wrote:
>In article <megaron.1796> gudeman@arizona.edu (David Gudeman) writes:
>>Any language with functions can have higher order functions, and any
>>language with the concept of `arguments' can have lazy evaluation of
>>them.  So how can these features be called advantages of functional
>>programming?
>
>Well, imperative languages do not support these features...
>
>'C' is one of the more flexible imperative languages....
>...one is not able to directly bind arguments to a function and thereby
>create a new function:
>	let p1 = plus 1;
>This view of functions is thoroughly incompatible with that of any imperative
>language.

int p1(x)
int x;
    {
    return x + 1;
    }

Or if you want to be polymorphic:

#define p1(x) ((x) + 1)


>...lazy evaluation is not supportable in
>imperative languages as it is an 'evaluate on demand' strategy for function
>parameter evaluation.  All imperative languages evaluate the arguments to
>functions before the function is invoked.

Algol 60 had call-by-name over twenty years ago.  It wasn't useful then
either.


The point about generic functions is valid, and many recent imperative
languages support them.
---
Jef

 Jef Poskanzer  unisoft!jef@ucbvax.Berkeley.Edu  ...ucbvax!unisoft!jef
                "I SAID I LOVE ALL MANKIND *DAMMIT*!!"

dickey@ssc-vax.UUCP (Frederick J Dickey) (07/09/87)

In article <1345@ogcvax.UUCP>, pase@ogcvax.UUCP (Douglas M. Pase) writes:
> In article <megaron.1796> gudeman@arizona.edu (David Gudeman) writes:
> >
> >Any language with functions can have higher order functions, and any
> >language with the concept of `arguments' can have lazy evaluation of
> >them.  So how can these features be called advantages of functional
> >programming?
> 
> I do not yet see the advantages to lazy evaluation except in a few cases
> which are academically interesting but of little practical use (all of which
> involve an exact representation of an infinite sequence, such as the digits of
> an irrational number).  However, lazy evaluation is not supportable in
> imperative languages as it is an 'evaluate on demand' strategy for function
> parameter evaluation.  All imperative languages evaluate the arguments to
> functions before the function is invoked.  I have not studied this particular
> feature (lazy evaluation) in any great depth so my understanding may well be
> partial.

Years ago I was interested in programming language semantics, in particular,
"fixed-point" semantics (I was a theory guy). My recollection is that
"lazy evaluation"-type parameter passing strategies are much easier to 
describe formally than the way parameter passing is usually done in imperative 
languages.  This was, to me at least, a counterintuitive result, since 
the "normal" way of passing parameters seems a lot simpler. I am inclined
to regard this clash between theory and practice as a "hint" from theory 
that there is something wrong with the conventional view of parameter passing.

willc@tekchips.TEK.COM (Will Clinger) (07/10/87)

In article <441@unisoft.UUCP> jef@unisoft.UUCP (Jef Poskanzer) writes:
>int p1(x)
>int x;
>    {
>    return x + 1;
>    }
>
>Or if you want to be polymorphic:
>
>#define p1(x) ((x) + 1)

These aren't higher order functions.  In Doug Pase's example, plus was the
higher order function, not p1.  It's hard to explain what plus does to a C
programmer because C can't express it.  Here's a stab at explaining what
plus does using a mix of Pascal and C and illegal syntax:

    function plus (y : number) : (function __ (__ : number): number);
      function p1 (x : number) : number;
        begin
          return x + y
        end;
      begin
        return p1
      end

The key thing to observe is that the function p1 must remember the value of
y that was in effect when it was created, even after the plus function has
returned.  This is why stack-based languages like Pascal and C don't support
higher order functions.
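
For comparison, in a language with first-class closures this works directly.
A small OCaml sketch (my illustration, not from the post) in which each
returned function keeps its own y alive after plus has returned:

	let plus y = fun x -> x + y     (* the returned closure captures y *)

	(* plus has already returned when these closures are stored... *)
	let adders = [ plus 1; plus 10; plus 100 ]

	(* ...yet each one still remembers the y it was created with. *)
	let results = List.map (fun f -> f 5) adders   (* [6; 15; 105] *)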

Peace, Will Clinger
Tektronix Computer Research Lab

pase@ogcvax.UUCP (Douglas M. Pase) (07/10/87)

In the referenced article, pase@ogcvax.UUCP (Douglas M. Pase) wrote:
>
>...one is not able to directly bind arguments to a function and thereby
>create a new function:
>	let p1 = plus 1;
>This view of functions is thoroughly incompatible with that of any imperative
>language.

In article <unisoft.441> jef@unisoft.UUCP (Jef Poskanzer) writes:
>
>int p1(x)
>int x;
>    {
>    return x + 1;
>    }

This misses the point completely.  Yes most people who claim to be even
remotely computer literate are able to write a function which has one
parameter and returns the sum of that parameter and one.  Suppose I wanted
you to write a function which accepted an arbitrary function (of one or more
parameters) and bound the number one to its first parameter.  This would be
considered a higher order function, because it is able to return a function
as its result.  In an SML-like language the function would be:

	let hof = \ f . f 1 ;

Many of you will recognize the '\' in the function as a 'lambda'.  The type
of 'hof' is

	hof : (int -> *a) -> *a

In other words, 'hof' is a function which accepts as its argument a function
which accepts an integer as *its* first argument and returns something of
unknown type.  'hof' then returns something of the same type as is returned
by its argument (it's obvious why).

Let's give a few examples to clarify just what this all means.  Suppose I
have the functions 's3' and 'm2' as follows:

	let s3 = \ x . \ y . \ z . x + y + z ;
	let m2 = \ x . \ y . x - y ;

's3' obviously sums three integers and 'm2' subtracts the second from the
first.  Their types are:

	s3 : int -> (int -> (int -> int))
	m2 : int -> (int -> int)

Now, I can use 'hof' to generate new functions as follows:

	let s2 = hof s3 ;	/* s2 : int -> (int -> int)	*/
	let m1 = hof m2 ;	/* m1 : int -> int		*/

's2' accepts two arguments, sums them, and adds '1'.  'm1' accepts one
argument, which it subtracts from '1'.  The point of this discussion is not,
as it was previously misperceived, that I can create the functions 's2' and
'm1', but that I can create the function 'hof', from which I can create
*at runtime* the functions 's2' and 'm1'.  It is with the creation of 'hof'
that imperative languages have such difficulty.
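
For readers who want to run this, here are roughly the same definitions as an
OCaml sketch (OCaml syntax, not the SML-like notation above):

	let hof f = f 1            (* hof : (int -> 'a) -> 'a *)

	let s3 x y z = x + y + z   (* s3 : int -> int -> int -> int *)
	let m2 x y = x - y         (* m2 : int -> int -> int *)

	let s2 = hof s3            (* s2 : int -> int -> int; s2 y z = 1 + y + z *)
	let m1 = hof m2            (* m1 : int -> int;        m1 y = 1 - y *)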


>Or if you want to be polymorphic:
>
>#define p1(x) ((x) + 1)

This function, although it may be polymorphic in a limited sense, is certainly
not a higher order function.

Earlier I wrote:

>...lazy evaluation is not supportable in
>imperative languages as it is an 'evaluate on demand' strategy for function
>parameter evaluation.  All imperative languages evaluate the arguments to
>functions before the function is invoked.

Jef Poskanzer writes:

>Algol 60 had call-by-name over twenty years ago.  It wasn't useful then
>either.

Call-by-name has interesting semantics -- which are different from those
of lazy evaluation.  Lazy evaluation allows one to create and partially
evaluate infinite data structures.  An attempt to evaluate an infinite
data structure in an imperative language leads to an infinite loop.  Now
I must add that a programmer can be very clever and avoid the infinite loop,
but that is my point -- in an imperative language the programmer must write
clever programs.  A functional language which supports lazy evaluation will
allow the programmer to define the structure in a way that is more natural
to the problem, and leave the cleverness to the compiler.  That which would
be an infinite loop in an imperative language may have reasonable semantics
in such a functional language.
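
To give this a concrete flavor, the sketch below uses OCaml's Seq module
(OCaml is strict, so the demand-driven behaviour has to be requested
explicitly, and the names here are mine; in a lazy language no special
machinery would be needed):

	(* An "infinite" sequence of integers starting at n, built on demand. *)
	let rec from n : int Seq.t = fun () -> Seq.Cons (n, from (n + 1))

	(* Consume only a finite prefix; the rest is never constructed. *)
	let rec take n (s : 'a Seq.t) =
	  if n = 0 then []
	  else match s () with
	    | Seq.Nil -> []
	    | Seq.Cons (x, rest) -> x :: take (n - 1) rest

	let first_five = take 5 (from 0)   (* [0; 1; 2; 3; 4] *)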

>The point about generic functions is valid, and many recent imperative
>languages support them.

I have seen some forms of ad-hoc polymorphism supported;  I have yet to see
the full capabilities of polymorphism become available.
--
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@Oregon-Grad.csnet

pdm@daimi.UUCP (Peter Mosses) (07/13/87)

In article <1332@ssc-vax.UUCP> dickey@ssc-vax.UUCP (Frederick J Dickey) writes:
>Years ago I was interested in programming language semantics, in particular,
>"fixed-point" semantics (I was a theory guy). My recollection is that
>"lazy evaluation"-type parameter passing strategies are much easier to 
>describe formally than the way parameter passing is usually done in imperative 
>languages.  This was, to me at least, a counterintuitive result, since 
>the "normal" way of passing parameters seems a lot simpler. I am inclined
>to regard this clash between theory and practice as a "hint" from theory 
>that there is something wrong with the conventional view of parameter passing.

In denotational (fixed point) semantics, the notation conventionally
used for specifying denotations is lambda-notation, which itself has
call-by-name parameter-passing.  This property of lambda-notation can,
in simple cases, be exploited to give a simple semantic description of
call-by-name parameter-passing in programming languages.

However, it is _almost_ as easy to specify that parameters are to be
evaluated in advance (just by making a function "strict").  There is
also an alternative lambda-notation (due to Gordon Plotkin) for
partial functions, where parameter evaluation is done in advance,
allowing a simple description of the same concept (call-by-name is
then specified by putting the parameter in a function abstraction).

In practice, "continuations" are used in denotational semantics to
express order of evaluation.  Then it is equally simple to specify
the above alternatives.  What complicates the semantics of (Algol,
Pascal) "call-by-value" is that it involves the allocation of a local
variable to hold the value of the parameter.  By the way, "call-by-
need" (where the parameter is evaluated _at most_ once, the first time
its value is needed in the body) has an even more complicated semantics.
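
A purely operational illustration of the three alternatives, sketched in
OCaml rather than in a semantic metalanguage (names and encoding are mine):
pass the parameter as a thunk for call-by-name, as a memoized lazy value for
call-by-need, and as an already-evaluated value for call-by-value, then count
how often it gets evaluated.

	let arg () = print_endline "evaluating"; 40   (* the actual parameter *)

	(* call-by-name: the thunk is re-evaluated at every use (prints twice) *)
	let by_name (x : unit -> int) = x () + x ()

	(* call-by-need: Lazy.force memoizes, so it evaluates at most once *)
	let by_need (x : int Lazy.t) = Lazy.force x + Lazy.force x

	(* call-by-value: the caller evaluates it exactly once, before the call *)
	let by_value (x : int) = x + x

	let () =
	  ignore (by_name arg);
	  ignore (by_need (lazy (arg ())));
	  ignore (by_value (arg ()))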

Semantic description is one thing; program verification is something
else.  Perhaps the simpler substitution properties of "call-by-name"
(compared to "call-by-value") might make it easier to reason about
program equivalence/correctness?  Otherwise, "theory" doesn't seem to
be giving any hints about which parameter-passing mechanisms are right
or wrong.
-- 
Peter Mosses						pdm@daimi.uucp
----------------------------------------------------------------------
[ Computer Science Department * Aarhus University * Aarhus * Denmark ]
----------------------------------------------------------------------

anw@nott-cs.UUCP (07/16/87)

In article <1345@ogcvax.UUCP> pase@ogcvax.UUCP (Douglas M. Pase) writes:
>Well, imperative languages do not support these features,
	(higher-order functions & lazy evaluation)
>							   and higher order
>functions, at least, are a central issue to functional programming.  Because
>imperative languages are modeled somewhat after the machines they run on,
>having storage locations, addresses and state, such concepts are really
>incompatible with them.  [...]
>	 One common feature of languages which directly support higher order
>functions is that 'plus' can be partially bound to form a new function --
	[ example omitted ]
>This view of functions is thoroughly incompatible with that of any imperative
>language.

	The only reason why I can't use in Algol the `plus' function:

		PROC plus = (INT i) PROC (INT) INT:
			(INT j) INT: j+i;

and then declare the `plus 1' function:

		PROC (INT) INT p1 = plus (1);

is that the Report defines the scope of the body of `plus' in a way that
makes the returned function too `local' to be identified with `p1'.  A
common (:-)) extension makes all this perfectly possible using partial
parametrisation.  See "Partial Parametrization" by C. H. Lindsey in Algol
Bulletin, AB37, pp24-26, July 1974.  For example,

		PROC plus = (INT i, j) INT: i+j;
		PROC (INT) INT p1 := plus (1, );
		IF p1 (6) /= 7 THEN print (("Oops!", newline)) FI;

Lindsey also gives a guide to implementation and the `functional composition'
example (amongst others):

		PROC compose = (PROC (REAL) REAL f, g, REAL x) REAL: f(g(x));
		PROC (REAL) REAL sex = compose (sqrt, exp, );

So HOFs were `old hat' in imperative languages by 1974;  no incompatibilities
there!

>Now, combine this idea with polymorphic types.  [...]

	By sheer coincidence, on pp26-29 of the same Algol Bulletin you will
find discussed examples and the implementation (again in an imperative
context) of modals.  I've pinched the following example:

  PROC sort = (MODE Z, REF [] Z vec, PROC (REF Z, REF Z) BOOL swapped) VOID:
		FOR i FROM UPB vec BY -1 TO LWB vec + 1
		   DO FOR j FROM i TO UPB vec
			WHILE swapped (vec[j-1], vec[j]) DO SKIP OD
		   OD;

Nothing very incompatible there, either.

> [...]			 However, lazy evaluation is not supportable in
>imperative languages as it is an 'evaluate on demand' strategy for function
>parameter evaluation.  All imperative languages evaluate the arguments to
>functions before the function is invoked.  [ ... ]

	Not supportable?  It is true that the historically important compilers
of the historically important imperative languages do not support lazy
evaluation;  but then, nearly all of them, from Fortran to C and Ada, have
had goals which include speed, efficiency, and the like.  There is no vital
reason why the semantics of any new imperative language should not encourage
lazy evaluation, or why one should not write a `lazy C' compiler (though
don't expect it to be easy or particularly useful).

							Andy Walker,
							anw@maths.nott.ac.uk

gudeman@arizona.edu (David Gudeman) (07/17/87)

In article <1345@ogcvax.UUCP> pase@ogcvax.UUCP (Douglas M. Pase) writes:

   In article <megaron.1796> gudeman@arizona.edu (David Gudeman) writes:
   >
   >Any language with functions can have higher order functions, and any
   >language with the concept of `arguments' can have lazy evaluation of
   >them.  So how can these features be called advantages of functional
   >programming?

I left for a week right after I posted this; that's why my reply is
late...

   Well, imperative languages do not support these features, ...

Scheme and SL5 are two imperative languages that support higher order
functions, SL5 and Algol are two imperative languages that support
lazy evaluation of arguments.  BTW, it may be that you want to call
Scheme a functional language.  You can if you want, but it is also an
imperative language in that it has assignment and control flow.

   Because imperative languages are modeled somewhat after the
   machines they run on, having storage locations, addresses and
   state, such concepts are really incompatible with them. ...

Storage locations, addresses, and state are not unique to RAM
machines, and their appearance in a programming language does not show
that the language is modeled at all on the machine it runs on.
Neither Scheme nor SL5 is modeled on modern computer architectures.

   'C' is one of the more flexible imperative languages.

Why do people always think of C, Algol, Pascal, and Fortran when they
say imperative language?  These are all hardly more than structured
assemblers compared to the really powerful ones like Lisp, Icon,
SNOBOL4, APL, Scheme, SL5 and dozens more.

   ... directly bind arguments to a function and thereby create a new
   function ...<long example of a curried function>...  This view of
   functions is thoroughly incompatible with that of any imperative
   language.

This view of functions is thoroughly compatible with that of a number
of imperative languages.  Maybe the disagreement is a result of your
definition of imperative languages.  If you define imperative
languages as the class of structured assemblers, I see how you might
think there is an incompatibility.  But even so, there isn't any
incompatibility.  It would be trivial to design C-with-currying, and
not too difficult to implement it.

   ...<long description of a limited sort of polymorphic types>...

A lot of imperative languages have polymorphic types also.

   I do not yet see the advantages to lazy evaluation except in a few
   cases which are academically interesting but of little practical
   use (all of which involve an exact representation of an infinite
   sequence, such as the digits of an irrational number).  

Lazy evaluation makes it possible to write functions/procedures that
work on a wider range of input.  For example, 

	function lazy-multiply(x,y)
	if x = 0 then return 0
	else return x * y
	end

cannot be written with strict evaluation: a strict call evaluates y even
when x is zero, so lazy-multiply(0, e) fails or diverges whenever e does.
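
In a strict language the same effect needs an explicit annotation.  A small
OCaml sketch of the idea (my own illustration, with the second argument
wrapped in lazy by hand):

	(* The second operand is forced only when x is nonzero. *)
	let lazy_multiply x y =
	  if x = 0 then 0 else x * Lazy.force y

	let zero = lazy_multiply 0 (lazy (failwith "never evaluated"))   (* 0 *)
	let prod = lazy_multiply 6 (lazy 7)                              (* 42 *)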

   However, lazy evaluation is not supportable in imperative languages
   as it is an 'evaluate on demand' strategy for function parameter
   evaluation.  All imperative languages evaluate the arguments to
   functions before the function is invoked.

Again, there are several counterexamples to this claim, which I've
already named.

					David Gudeman

					    Department of Computer Science
gudeman@arizona.edu			    Gould-Simpson Science Building
{allegra,cmcl2,ihnp4,noao}!arizona!gudeman  The University of Arizona
602-621-2858				    Tucson, AZ 85721