[comp.lang.misc] Polymorphism

nick@lfcs.ed.ac.uk (Nick Rothwell) (01/12/89)

In article <5818@medusa.cs.purdue.edu> rjh@cs.purdue.edu (Bob Hathaway) writes:
>Generics and Milner's type inference system are 
>examples of statically checked (typed) polymorphism.  This form can be
>accomplished at compile time and offers strong type checking.  Dynamic 
>typechecking, which I often refer to as arbitrary polymorphism, is a more
>powerful form and usually cannot be accomplished at compile time.  
                   ^^^^^^^

Well, cannot at all, I'd say, by definition of compile-time and dynamic.

Re: dynamic typechecking being "more powerful" than static typechecking.
More powerful, perhaps, in that you have the option of allowing functions
to determine, during execution, the form (type) of their arguments.
But, a static typechecking system can *guarantee* that a program, if correctly
statically typed, cannot ever give you a type error, no matter what execution
paths are taken. A dynamically typed program doesn't give you this guarantee -
how do you know that some bizarre pattern of inputs won't cause a type error?
Look at a lisp system, with messages like "wrong type: fredp, nil" coming up
in supposedly correct code, to see what I mean.
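The same failure mode shows up in any dynamically typed language. A small Python sketch (the function and inputs are invented for illustration): the bug sits on a branch that rarely executes, so the program looks correct until an unusual input arrives, whereas a static checker would reject the str/int mismatch before the program ever ran.

```python
def describe(x):
    # Fine on the common path...
    if x >= 0:
        return "non-negative"
    # ...but this branch runs only for negative input, and has a type bug:
    return "negative: " + x   # TypeError at run time: str + int

describe(5)     # works -- the buggy branch was never exercised
# describe(-1) would raise TypeError only when that input finally occurs.
```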

>Bob Hathaway

		Nick.
--
Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
...while the builders of the cages sleep with bullets, bars and stone,
they do not see your road to freedom that you build with flesh and bone.

gudeman@arizona.edu (David Gudeman) (01/14/89)

In article  <1228@etive.ed.ac.uk> nick@lfcs.ed.ac.uk (Nick Rothwell) writes:
>In article <5818@medusa.cs.purdue.edu> rjh@cs.purdue.edu (Bob Hathaway) writes:
>>...  Dynamic 
>>typechecking, which I often refer to as arbitrary polymorphism, is a more
>>powerful form and usually cannot be accomplished at compile time.  
>                   ^^^^^^^
>
>Well, cannot at all, I'd say, by definition of compile-time and dynamic.

There are programs called "type-inference" systems that can infer the
types of most or all variables in a language that has untyped
variables.  I suppose there is a distinction between a dynamically
typed language and a language that simply doesn't require type
declarations.  I believe ML is of the latter class (I hope I'm not
confusing ML with some other language here). It doesn't require type
declarations, but expressions are defined in such a way that the types
of all expressions can be inferred at compile time.

Dynamically typed languages such as Lisp or Icon allow types to be
decided at run time.  For example, let read_number be a procedure that
reads a number from a file:

	if read_number() < 0 then return 0
	else return ["a", "list", "of", "strings"]

It is obviously impossible to infer all types exactly in such a
language, but it is possible to make approximations.  And in most
programs, most variables are used in a type-consistent manner, so it
is generally possible to infer the types of 80% or more of all
variable uses.
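Gudeman's read_number example translates directly into Python (the names here are invented for illustration): the type of the result genuinely depends on a value that only exists at run time, which is why no static analysis can assign the expression a single simple type.

```python
def classify(read_number):
    # The *type* of the result depends on a value read at run time,
    # so it cannot be pinned down by compile-time inference alone.
    if read_number() < 0:
        return 0
    else:
        return ["a", "list", "of", "strings"]

classify(lambda: -3)   # an int
classify(lambda: 7)    # a list of strings
```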

alan@rnms1.paradyne.com (Alan Lovejoy) (04/21/89)

vax.bbn.com (Mike Thome) writes:
rnms1.paradyne.com (Alan Lovejoy) writes:
>>-- Portability --
>>1) Language portability is a function of how similar the compilers for different
>>systems are and the breadth and depth of STANDARD functions that are provided
>>via STANDARD interfaces.  This is largely an extra-linguistic issue that
>>has more to do with the history and culture of different language communities
>>than it has to do with the technical merits of languages as languages.
>>Smalltalk is the winner here, with C++ a strong runner-up.
>I'm not sure I see how this is *at all* a linguistic issue - it sounds
>as thought you are talking more of programming environment and fidelity
>of implementations than anything else.  Frankly, I'm surprised that C++
>is listed in the same sentence as Smalltalk in this regard - Smalltalk is
>an entire integrated environment implemented in itself... the only other
>similar systems that come close are Lisp Machines (from various vendors)
>running Flavors - perhaps some of the pascal P-machines might be grouped
>in the same way.

1) How is portability a linguistic issue?

Languages that lack certain amenities that users find irresistible tend to
sprout nonstandard extensions in each implementation.  FORTRAN and Pascal
are famous examples.

Then there are those languages whose programmer interface is too "physical"
or perhaps "low level" (which is not necessarily the same thing!), such that
it assumes too much about the architecture of a particular system.  Assembly
languages are good examples, although even "high-level" languages may exhibit
this problem.  It is often masked by the architectural similarities of
modern general purpose systems, however (e.g., byte, halfword, fullword
bit sizes; binary encoding; data type-machine instruction binding).

2) Why does C++ deserve mention alongside ST-80 with regard to portability?

C++ inherits :-) a large supply of STANDARD functions and STANDARD interfaces
from C.  To a large extent, C's supply of standard functions arises out of
its association with the UNIX operating system.  While comparing UNIX and
the ST-80 image is not completely kosher, it cannot be denied that both
UNIX and the ST-80 image represent a LOT of code and a LOT of functionality.
And that none of the other languages mentioned have anything like it to 
within an order of magnitude.

>>-- Object Orientation --
>>The definition of "object oriented language" is open to debate.  Instead of
>>debating the minimum set of attributes that an object-oriented language 
>>should have, it is more instructive to consider the ideal case.  
>>  First, some definitions: A class is a set of values and the functions/relations
>>defined on the members of that set.  An object is an instance of a class which
>>represents one of the values in the set, that is, one of values of the class.
>>  Inheritance is the ability to define a new class in terms of its differences
>>(extensions, deletions, changes) from some other pre-existing class.
>CLOS Generalizes that to "... from some other pre-existing (required for
>instance instantiation) class or classes".
>
>>The ideal object-oriented language would:
>>1) Have user-definable classes.
>CLOS satisfies this requirement - methods may be specified not only on
>defined classes, but also built-in data types (ie. integers vs. floats vs.
>complex vs. ratios ... etc) and even single values.
>
>>2) Have class inheritance.
>CLOS has this.  In addition, it allows inheritance from multiple
>(possibly disjoint) classes.
>
>>3) Allow function/operator overloading.
>CLOS has this.  Any function may be specialized on any one or more of
>its required arguments. (there is no difference between operator and
>function in lisp.  Note that macros and special forms may not be
>specialized).
>
>>4) Be completely polymorphic.
>>Smalltalk and Lisp are highly polymorphic.  Next is Objective-C and Prolog.
>>Then C++, and last is Modula-2.
>I would like to see the definition of "polymorphic" (for my own
>education).

You (and several other people, in email) asked for it!  So here it is,
along with three references:

1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
   David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
   Halsted Press: a division of John Wiley & Sons
   -- MUST READING --

2. "Denotational Semantics -- A Methodology For Language Development" -- 1986
   David A. Schmidt, Kansas State University
   Allyn and Bacon, Inc.
   
3. "Abstraction Mechanisms And Language Design" -- 1981
   an ACM Distinguished Dissertation (1982) by Dr. Paul N. Hilfinger, CMU
   The MIT Press
   
-- Definition Of Polymorphism --
   
Full polymorphism means that any component or aspect of a program, program 
component or programming language must behave as a value.  Any and all
values must be subject to inspection, modification, storage in memory,
usage as a component part of some composite value, usage as a parameter, 
usage as an operand in an expression, being computed as the result of
evaluating an expression or function, and (re)definition of its form, structure,
content or name.  The protocol for invoking these common operations must be 
the same for all values, regardless of type.

To say the same thing more concretely:  numbers, characters, data types,
procedures, variables, addresses, blocks, execution contexts, name bindings, 
scopes, processes and the number of bits in an integer must all be "first 
class objects."  First class objects all share the same set of fundamental
rights, privileges and abilities, the protocol for the exercise of which
is the same for all objects. A first class object has a value that can be 
inspected, changed or stored in memory.  There are functions and/or relations 
that have been and/or can be defined for any first class object. A first class
object can have its structure defined or redefined. It can be bound to a name,
and its name can be changed.
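A present-day illustration of "first class" in this sense, sketched in Python (where numbers, functions and types, at least, meet the definition; the helper name is invented):

```python
# Numbers, functions and types are all ordinary values in Python:
# each can be named, stored in a structure, passed as an argument
# and returned as a result, via one uniform protocol.
values = [42, len, int]            # an int, a function, a type in one list

def apply_to_hello(f):
    return f("hello")

apply_to_hello(len)                # a function passed as an argument -> 5
binding = values[2]                # a type retrieved like any other value
binding("123")                     # calling the type converts the string -> 123
```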

-- Definition of Abstraction --

Abstraction is the process of discovering and/or expressing what is INVARIANT,
constant and organic in a system by taking out, leaving out or not mentioning 
that which is changing, temporary or accidental.

-- Programming Language Abstraction Mechanisms --

User definable procedures, functions and data types are the most important
(and most common) abstraction mechanisms found in modern programming languages.

Generally, the abstraction mechanisms in a programming language permit one to
construct GENERAL PURPOSE structures, devices or tools that are DEFINED ONCE BUT
USED MANY TIMES in different contexts.  Let us refer to such entities in a
program as "object abstractions".  Object abstractions serve as TEMPLATES from
which multiple but partially identical program objects can be created.  Each
such "created object" in a program (or set of programs), as a separate INSTANCE
of the same object abstraction, will have all the same properties, attributes,
functions and structures defined for that object abstraction.  The idea is to
build an abstracted MASTER OBJECT, an ORIGINAL MOLD from which multiple actual
instances can be generated like cookies from a cookie cutter.  Ideally, the
programmer will design his object abstractions in such a way that they can even
be reused in totally unrelated programs or systems.  
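The "object abstraction" idea is exactly what a class provides in an object-oriented language. A minimal Python sketch (the class and names are invented for illustration):

```python
# A class is an "object abstraction": a template defined once and
# instantiated many times.
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

a = Account("Ada")
b = Account("Bob")
a.deposit(10)       # each instance carries its own state, but shares the
                    # structure and behaviour defined for the abstraction
```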

By using instantiation of object abstractions to generate program objects, the
workings, interfaces and functionality of the program objects are tied to the
object abstraction, so that modifications or fixes need only be done to the
object abstraction; the instantiation process ensures that modifications are
automatically propagated to the generated program objects. Instantiation of
object abstractions allows multiple copies of a program object to be created
with little extra effort beyond what is required to make just the first copy,
saving time.

Another type of abstraction mechanism called PARAMETERIZATION is also of
critical importance.  Parameterization is a mechanism which permits the
programmer to define program objects whose full nature, whose complete
composition, structure, operation and/or data is NOT DETERMINED UNTIL THE OBJECT
IS ACTUALLY INSTANTIATED, OR EVEN UNTIL THE PROGRAM IS EXECUTED! For example, a
square is a shape abstraction whose usefulness derives partly from the fact that
it says nothing about the SIZE of square shaped objects.  The concept or
abstraction "square" is INDEPENDENT of size (and of horizontal/vertical
orientation, location, material composition, weight, color, velocity...).  All
actual squares must of course have some actual size, but the ability to form
such abstractions DIVORCED of certain details is incredibly useful.  It is far
better to write a square drawing procedure that will draw a square of any size
than to have to write a different square drawing procedure for each desired size
of square. Such a size-independent procedure is said to be parameterized: it
accepts the size of the square to be drawn as an input "parameter".
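The size-independent square-drawing procedure can be sketched in a couple of lines of Python (rendering as asterisks is an invented concretization):

```python
def draw_square(size):
    # One parameterized procedure replaces a separate procedure
    # for every desired size of square.
    edge = "*" * size
    return "\n".join(edge for _ in range(size))

print(draw_square(3))
# ***
# ***
# ***
```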

-- Polymorphism and Abstraction Mechanisms --

Polymorphism involves two invariants:  everything behaves as a value, and all
values have a common set of abilities and a common protocol for the exercise of
those abilities.  Abstraction depends upon discovering and exploiting
invariants.  If everything is a value, then everything can be used by
abstraction mechanisms.  If the protocol for exercising the fundamental
abilities of all values is the same, then the same abstraction mechanisms can be
applied to values of any type.

What abstractions can be created by abstraction mechanisms depends upon what can
be a value.  The generality of the abstractions depends upon both what can be a
value and the commonness of the protocol for accessing common attributes and/or
behaviors of values.

For example, the usefulness of the list abstraction depends upon what can be
a member of the list (what can be a value), and upon the orthogonality of the
protocol for manipulating values:  each different protocol for loading/storing
values from/to memory (e.g., "list.element := element" versus "list.element :=
element^") requires a different version of the abstraction.

Consider the following function definition:

    function sum(a);
    begin
      s := 0;
      for i := 1 to size(a) do
        s := s + a(i);
      return s;
    end sum;

The usefulness of this function depends upon what values can be input to it
as parameters, upon what value types functions can return, upon what value
types have the ":=", "+" and "()" operations defined on them and upon what value
types have the "size()" function defined for them.  In a language where the
function invocation and array indexing operators are both "()", the
parameter to this function might be either an array or a function.  In Pascal
or Modula-2, the parameter would have to be a function, because the
array indexing operator is "[]".
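The same uniformity argument can be made in Python, where any value exposing the sequence protocol is acceptable to one abstraction; the Table adapter below is an invented illustration of feeding a function to the same code that handles arrays:

```python
def total(a):
    # Mirrors the sum() of the text: works for any value supporting
    # len() and integer indexing, whatever its element source.
    s = 0
    for i in range(len(a)):
        s = s + a[i]
    return s

class Table:
    # Invented adapter: presents a function as a sequence, so total()
    # accepts functions as well as arrays -- the "()" vs "[]" point.
    def __init__(self, f, n):
        self.f, self.n = f, n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        return self.f(i)

total([1, 2, 3])                    # an array-like value -> 6
total(Table(lambda i: i * i, 4))    # a function in disguise -> 0+1+4+9 = 14
```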

The point is that polymorphism enhances the power of abstraction mechanisms.
Lack of polymorphism cripples abstraction.

--
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

nick@lfcs.ed.ac.uk (Nick Rothwell) (04/21/89)

I've cut down the Followup distribution of this, to stop it going out to the
entire universe; I think that general discussions of language issues belong
in comp.lang.misc...?

In article <5957@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
>1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
>   David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
>   Halsted Press: a division of John Wiley & Sons
>   -- MUST READING --
>

The mathematical basis for polymorphic type systems comes from Robinson's
article on unification in the JACM, from around '67 (sorry, can't remember
the title). From here, there's Milner's work, the LCF system, and of course
(fanfare:) ML.

>-- Definition Of Polymorphism --
>   
>Full polymorphism means that any component or aspect of a program, program 
>component or programming language must behave as a value.  Any and all
>values must be subject to inspection, modification, storage in memory,
>usage as a component part of some composite value, usage as a paremeter, 
>usage as an operand in an expression, being computed as the result of
>evaluating an expression or function, and (re)definition of its form, structure,
>content or name.  The protocol for invoking these common operations must be 
>the same for all values, regardless of type.

People might disagree about the extent of "full" polymorphism. ML is
certainly polymorphic, although you can't "inspect" all values (they may
be abstract, or function objects). "modification" is only allowed on
objects created specially for the purpose: how do you modify a function?
"Storage in memory" - doesn't make sense for higher-order garbage collected
languages. "(re)definition of its form" I don't understand...

>To say the same thing more concretely:  numbers, characters, data types,
>procedures, variables, addresses, blocks, execution contexts, name bindings, 
>scopes, processes and the number of bits in an integer must all be "first 
>class objects."

...although a lot of these things (addresses? blocks?) don't exist in some
languages. And I don't see that you want to allow the user to alter absolutely
everything ("hey, let's change the number of bits in a word from being 32
to being yellow...").
   I presume that this is David Harland's definition of polymorphism...? His
approach is to provide this rich model, where everything goes, with
sophisticated hardware support. But, this is a specific approach, and shouldn't
be taken to imply that polymorphism encapsulates everything in this approach.

>-- Definition of Abstraction --
>
>Abstraction is the process of discovering and/or expressing what is INVARIANT,
>constant and organic in a system by taking out, leaving out or not mentioning 
>that which is changing, temporary or accidental.

Isn't that a rather strange way of putting it? I view abstraction as a process
of distinguishing a system's interface from its implementation.

>The point is that polymorphism enhances the power of abstraction mechanisms.
>Lack of polymorphism cripples abstraction.

Absolutely. In conventional languages, you have to be concerned with the
form data structures take (is it a record? an address of a record? do I
have to garbage collect it? etc. etc.). Polymorphism does away with this,
but the price you pay is that polymorphism requires support (garbage collected
store, special hardware, etc. etc.). A price I'm quite willing to pay,
actually... Now, where's that ML session...?

>Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.

		Nick.
--
Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
...while the builders of the cages sleep with bullets, bars and stone,
they do not see your road to freedom that you build with flesh and bone.

jack@cs.glasgow.ac.uk (Jack Campin) (04/22/89)

alan@rnms1.paradyne.com (Alan Lovejoy) wrote:

>>> [the ideal programming language should] Be completely polymorphic.
>>I would like to see the definition of "polymorphic" (for my own education).
  
> You (and several other people, in email) asked for it!  So here it is,
> along with three references:
> 1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
>    David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
>    Halsted Press: a division of John Wiley & Sons

> Full polymorphism means that any component or aspect of a program, program 
> component or programming language must behave as a value.  Any and all
> values must be subject to inspection, modification, storage in memory,
> usage as a component part of some composite value, usage as a paremeter, 
> usage as an operand in an expression, being computed as the result of
> evaluating an expression or function, and (re)definition of its form,
> structure, content or name.  The protocol for invoking these common
> operations must be the same for all values, regardless of type.

> To say the same thing more concretely:  numbers, characters, data types,
> procedures, variables, addresses, blocks, execution contexts, name bindings, 
> scopes, processes and the number of bits in an integer must all be "first 
> class objects."

You and Harland may mean that by "polymorphic", but the rest of the world
certainly doesn't.  The mathematically understood senses of polymorphism are
(1) the Hindley type system, as implemented in Milner's ML and most functional
languages since, and (2) Girard's more powerful type system F, reinvented by
Reynolds and not fully implemented in any language I can think of (Fairbairn's
Ponder comes closest; Mitchell and Plotkin's SOL and Barendregt's TALE do it
on paper).  There are umpteen mix-and-match combinations of these with other
ideas.  It may be possible to construct a logical system combining the above
notion of dependent type with polymorphism; but nobody has ANY real idea how
to typecheck programs written in such calculi - a few flaky attempts to
implement some of Martin-Lof's type theories are the nearest anyone's got.
Nobody has the *faintest* idea how to coherently describe type systems in
which types are assignable values, as you ask for above.  Putting all three of
these requirements (Milner or Girard polymorphism, dependent types, assignable
types) together in one formal calculus is presently quite unimaginable, still
less implementing the result in a type-safe programming language.

For another cautionary example, look at Cardelli's attempts to meet a few of
the same criteria in recent years: the result being a series of inconsistent
type systems attempting to provide a type of all types and an algorithm for
typechecking dependent types that nobody has any idea how to prove complete
(or refute, for that matter).  Or look at Burstall and Lampson's Pebble
(dependent types and first-class bindings), still unimplemented ten years or
so after its design.  This does not suggest we can expect any quick answers.

Yes, you might be able to hack together a language that provided all those
motherhood features (though as far as I know Harland's team at Linn haven't
managed to implement any significant fragment of Poly in the five years since
that book came out).  But you wouldn't have a hope of reasoning about programs
written in it, and the appropriate formalism for verifying the compiler would
be chicken sacrifices and moving your mouse in pentagrams over the source code.

[ for references to this stuff, browse through a few recent LNCS's on data
  types and the like ]

Despite this, the machine (REKURSIV/OBJEKT) that Harland designed to implement
these ideas seems to be a fast engine for running object-oriented code on.
-- 
Jack Campin  *  Computing Science Department, Glasgow University, 17 Lilybank
Gardens, Glasgow G12 8QQ, SCOTLAND.    041 339 8855 x6045 wk  041 556 1878 ho
INTERNET: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk    USENET: jack@glasgow.uucp
JANET: jack@uk.ac.glasgow.cs     PLINGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack

norvell@csri.toronto.edu (Theodore Stevens Norvell) (04/23/89)

In article <2841@crete.cs.glasgow.ac.uk> jack@cs.glasgow.ac.uk (Jack Campin)
makes some good points, but wrote:
>
>alan@rnms1.paradyne.com (Alan Lovejoy) wrote:
>> To say the same thing more concretely:  numbers, characters, data types,
>> procedures, variables, addresses, blocks, execution contexts, name bindings, 
>> scopes, processes and the number of bits in an integer must all be "first 
>> class objects."
>
>You and Harland may mean that by "polymorphic", but the rest of the world
>certainly doesn't.  The mathematically understood senses of polymorphism are
>(1) the Hindley type system, as implemented in Milner's ML and most functional
>languages since, and (2) Girard's more powerful type system F, reinvented by
>Reynolds and not fully implemented in any language I can think of [...]
>
Perhaps this is a quibble, but surely you are confusing "polymorphism" with
"polymorphic type inference".  Consider the following quotation describing
polymorphic programming:
	A widely employed style of programming, particularly in structure-
	processing languages which impose no discipline of types (LISP is
	a perfect example), entails defining procedures which work well on
	objects of a wide variety (e.g, on lists of atoms, integers, or
	lists).
Surely Harland's Poly is another perfect example.  In Poly can one not
define procedures which work well on objects of a very wide variety?

The quotation, by the way, is from Milner's seminal paper `A Theory
of Type Polymorphism in Programming', (J. Comp. & Sys. Sci., 17, 1978)
in which he used Hindley's system (without knowing it was Hindley's)
to do strong type checking in a lisp-like language (ML).  The introduction
of this approach did not suddenly make LISP, Prolog, or Smalltalk (or
Poly, had it existed at the time) less polymorphic.  Nor did it make
languages that use other approaches to strong type checking of polymorphic
programs (e.g., C++ and Modula-3) less polymorphic.

Milner also credits the term "polymorphism" to Strachey in 1967 which
predates even Hindley's work (1969).

The term "polymorphic type inference", on the other hand, describes
perfectly the work of Hindley, Milner, Girard, Reynolds, McCracken,
Cardelli, Wand, etc.

Theo Norvell
U of Toronto

On the other hand, language is living.  Perhaps I'm just out of date.

alan@rnms1.paradyne.com (Alan Lovejoy) (04/23/89)

In article <1810@etive.ed.ac.uk> nick@lfcs.ed.ac.uk (Nick Rothwell) writes:
<In article <5957@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
<>-- Definition Of Polymorphism --
<>   
<>Full polymorphism means that any component or aspect of a program, program 
<>component or programming language must behave as a value.  Any and all
<>values must be subject to inspection, modification, storage in memory,
<>usage as a component part of some composite value, usage as a paremeter, 
<>usage as an operand in an expression, being computed as the result of
<>evaluating an expression or function, and (re)definition of its form, structure,
<>content or name.  The protocol for invoking these common operations must be 
<>the same for all values, regardless of type.

<People might disagree about the extent of "full" polymorphism. 

Yep, they certainly will.  I gave the most extreme definition I could devise
in English.  I wanted to start a debate.  Looks like it worked.

<ML is
<certainly polymorphic, although you can't "inspect" all values (they may
<be abstract, or function objects). 

By "inspect" I simply mean "read."  If I can code "IF a = b," then I can
"inspect" or "read" "a" and "b".

<"modification" is only allowed on
<objects created specially for the purpose: how do you modify a function?

By "modification" I simply mean "change."  If I can code "INC(i)", so that
the value of "i" is incremented to the next "higher" member of the ordered set 
of values of which "i" is a member,  then I have the ability to "modify" "i".

Also, I really want to require that modification be possible in principle,
if the programmer wants to define any operations that modify values of some
type.

As for "modifying a function," how about recompiling it?  Or changing the
input parameter "type" and/or "range" checking? 

<"Storage in memory" - doesn't make sense for higher-order garbage collected
<languages. 

Would you accept Smalltalk as a higher-order garbage-collected language?
The Smalltalk code:

  "Smalltalk at: #Caveman put: 'Fred'.", 

when evaluated, causes the instantiation of an object of class String which
contains the characters 'Fred'.  These characters would be stored in memory
in any Smalltalk implementation for any machine of which I am aware.

What did you think I meant?

<"(re)definition of its form" I don't understand...

Smalltalk permits the instance variables (fields) defined for the instances
of a class to be removed and/or added to even when instances of a class
are already in existence.  When this happens, the existing instances are
dynamically modified to reflect this "redefinition" of form.

To allow the definition of the form of a value is simply to allow its
structure to be defined.
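Python supports much the same kind of run-time redefinition of form as the Smalltalk behaviour described above (the class and attribute names here are invented):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
p.z = 3                    # extend the form of an existing instance
# Classes themselves are values whose definition can be changed,
# and existing instances see the change:
Point.norm1 = lambda self: abs(self.x) + abs(self.y)
p.norm1()                  # -> 3
```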

<>To say the same thing more concretely:  numbers, characters, data types,
<>procedures, variables, addresses, blocks, execution contexts, name bindings, 
<>scopes, processes and the number of bits in an integer must all be "first 
<>class objects."
<
<...although a lot of these things (addresses? blocks?) don't exist in some
<languages. And I don't see that you want to allow the user to alter absolutely
<everything ("hey, let's change the number of bits in a word from being 32
<to being yellow...").

I didn't say that absolute, full polymorphism was necessarily a good idea.
I merely gave a definition of the extreme case for pedagogic purposes and
so as to incite discussion.

<   I presume that this is David Harland's definition of polymorphism...? His

He might quibble with it.  It's really mine, strongly influenced by his ideas.

<approach is to provide this rich model, where everything goes, with
<sophisticated hardware support. But, this is a specific approach, and shouldn't
<be taken to imply that polymorphism encapsulates everything in this approach.

I don't think that dealing with the MINIMUM specifications of concepts is
always the most productive approach.  Defining the ideal is sometimes
more instructive.  Transitive closure.

<>-- Definition of Abstraction --
<>
<>Abstraction is the process of discovering and/or expressing what is INVARIANT,
<>constant and organic in a system by taking out, leaving out or not mentioning 
<>that which is changing, temporary or accidental.

<Isn't that a rather strange way of putting it? I view abstraction as a process
<of distinguishing a system's interface from its implementation.

That's because the interface is invariant under multiple possible 
implementations.


Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

nick@lfcs.ed.ac.uk (Nick Rothwell) (04/24/89)

In article <5973@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
><ML is
><certainly polymorphic, although you can't "inspect" all values (they may
><be abstract, or function objects). 
>
>By "inspect" I simply mean "read."  If I can code "IF a = b," then I can
>"inspect" or "read" "a" and "b".

I stand by my words. You can't evaluate "a = b" in ML if a and b are functions,
since equality doesn't make sense. If they're abstract types, then any
default interpretation of "=" may be incorrect (e.g. sets represented as
lists).

><"modification" is only allowed on
><objects created specially for the purpose: how do you modify a function?
>
>By "modification" I simply mean "change."  If I can code "INC(i)", so that
>the value of "i" is incremented to the next "higher" member of the ordered set 
>of values of which "i" is a member,  then I have the ability to "modify" "i".

I stand by my words here as well. If I bind X to 3 (in a functional language),
I don't want X to be alterable.

>Also, I really want to require that modification be possible in principle,
>if the programmer wants to define any operations that modify values of some
>type.

But the point of a high-level language is that it provides a firm semantic
framework for the objects that you're dealing with and the operations that
they support...? I don't think that making "any modification possible in
principle" is desirable. Or feasible, for that matter.

>As for "modifying a function," how about recompiling it?  Or changing the
>input parameter "type" and/or "range" checking?

I distinguish "modification" and "re-binding". If I rebind a function "F",
that declares a new "F", which I can then go on and use. Existing code
doesn't see the change (unless it is also recompiled).
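The modification/re-binding distinction can be made concrete in Python (names invented): rebinding a name declares a new value for it, while anything already bound to the old object continues to see the original.

```python
def f():
    return "old"

g = f               # g is bound to the current function object

def f():            # rebinding the name f declares a "new" f...
    return "new"

f()    # -> "new"
g()    # -> "old"   ...but existing bindings still see the original
```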

><"Storage in memory" - doesn't make sense for higher-order garbage collected
><languages. 
>Would you accept Smalltalk as a higher-order garbage-collected language?
>What did you think I meant?

ML has objects which accept assignment, but no notion of "memory" or "address"
in the classical sense. The behaviour of ML reference objects is defined
semantically.

>Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.

		Nick.

Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
               Fais que ton reve soit plus long que la nuit.

alan@rnms1.paradyne.com (Alan Lovejoy) (04/26/89)

In article <1834@etive.ed.ac.uk> nick@lfcs.ed.ac.uk (Nick Rothwell) writes:
<In article <5973@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
<><ML is
<><certainly polymorphic, although you can't "inspect" all values (they may
<><be abstract, or function objects). 
<>
<>By "inspect" I simply mean "read."  If I can code "IF a = b," then I can
<>"inspect" or "read" "a" and "b".
<
<I stand by my words. You can't evaluate "a = b" in ML if a and b are functions,
<since equality doesn't make sense. If they're abstract types, then any
<default interpretation of "=" may be incorrect (e.g. sets represented as
<lists).

I'm sorry, I thought the meaning of my example was clear.  Obviously, I was
wrong.     

Let the example be "a OP b," or if you don't like to distinguish between
operators and functions, "f(a, b)."  My point has nothing whatever to do
with the "=" operator.

It should be pointed out, however, that there is no inherent reason that
function-values cannot or should not be compared for equality.  I know of
at least one widely used language, in which functions are values and can
be assigned to variables, which allows function values to be compared for
equality.

<><"modification" is only allowed on
<><objects created specially for the purpose: how do you modify a function?
<>
<>By "modification" I simply mean "change."  If I can code "INC(i)", so that
<>the value of "i" is incremented to the next "higher" member of the ordered set
<>of values of which "i" is a member,  then I have the ability to "modify" "i".
<
<I stand by my words here as well. If I bind X to 3 (in a functional language),
<I don't want X to be alterable.

The definition I gave for polymorphism does not state that it is only to be
applied to functional languages.  In languages which have modifiable value
types, all types must be modifiable if all values are to be first class 
citizens.  But in a functional language, values are not modifiable, only
"readable" and "replaceable."  So "INC(i)" is done by rebinding "i" to
the value of "add(i, 1)."  Bindings, of course, represent part of the
environment of execution.  They are state.  Rebinding implies a modification
of this state, a change in the environment.  The fact that only bindings are 
modifiable values in functional languages provides such languages with some 
impressive benefits.  It also means that either bindings, or else all other 
values, are not first class citizens in such languages.  Whether the benefits 
are worth the cost is an interesting topic for debate.  I am as yet undecided.

<>Also, I really want to require that modification be possible in principle,
<>if the programmer wants to define any operations that modify values of some
<>type.
<
<But the point of a high-level language is that it provides a firm semantic
<framework for the objects that you're dealing with and the operations that
<they support...? I don't think that making "any modification possible in
<principle" is desirable. Or feasible, for that matter.

I am reminded of Chomsky's distinction between competence and performance.  
Writing the software for the SDI system may not be desirable--or feasible, for 
that matter.  But it should be possible in principle, given the right tools.

The purpose of a high level language is to provide abstraction mechanisms.
It's the job of the programmer to use those abstraction mechanisms to define
the semantics of the abstractions he creates with the abstraction mechanisms
provided by the language.  The firmness of the semantic framework is his
responsibility, provided the language's abstraction mechanisms give him
sufficient control over the nature of his abstractions that he can provide
a firm semantic framework for them.

There are two schools of thought in language design:  the Bondage&Discipline
School and the Freedom&Liberty School.  The Disciplinarians worry about
preventing misuse of value types and functions.  The Freedom Fighters
worry about arbitrary restrictions on the power of a language.  Happily,
there is a middle ground:  let the programmer define his own restrictions,
to be enforced by the language.  After all, the programmer is the expert
on the scene.  He's writing the program.  He knows the application domain.
Such decisions are rightfully his.  If he does not have the intelligence
to discern what safeguards he should build into his abstractions, he probably
won't be smart enough to appreciate a Bondage&Discipline language.  He'll use
C, or something similar.

<>As for "modifying a function," how about recompiling it?  Or changing the
<>input parameter "type" and/or "range" checking?
<
<I distinguish "modification" and "re-binding". If I rebind a function "F",
<that declares a new "F", which I can then go on and use. Existing code
<doesn't see the change (unless it is also recompiled).

Rebinding is what functional languages do instead of value modification.  So
rephrase my definition so that "rebinding" replaces "modification" in the 
case of functional languages.

<><"Storage in memory" - doesn't make sense for higher-order garbage collected
<><languages. 
<>Would you accept Smalltalk as a higher-order garbage-collected language?
<>What did you think I meant?
<
<ML has objects which accept assignment, but no notion of "memory" or "address"
<in the classical sense. The behaviour of ML reference objects is defined
<semantically.

Again, rephrase the definition using the terms for the things that functional
languages do instead of "storage in memory."  At a more primitive level,
where a functional language program is actually implemented in machine code,
"storage in memory" IS occurring.

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

nick@lfcs.ed.ac.uk (Nick Rothwell) (04/26/89)

In article <6000@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
>It should be pointed out, however, that there is no inherent reason that
>function-values cannot or should not be compared for equality.  I know of
>at least one widely used language, in which functions are values and can
>be assigned to variables, which allows function values to be compared for
>equality.

It depends how you define "equality" on functions. Consider:

		let val F = lambda x. lambda y. x+y
		in
		   (F 1) = (F 2)
		end

Then, "F 1" and "F 2" are both functions. Are they equal? They've
evaluated to different dynamic objects, they have different closures,
and they will give different results. However, they both derive from
the static definition of F. The only sensible definition of equality
is equality of dynamic occurrence; but then, two functions may behave
identically (this is undecidable) but not be "equal".
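
To make Nick's point concrete, here is a rough analogue in Python (standing in
for ML only because it also has first-class closures; the names are mine, not
anything from the thread):

```python
# Analogue of: let val F = lambda x. lambda y. x+y
F = lambda x: (lambda y: x + y)

f1 = F(1)   # a closure capturing x = 1
f2 = F(2)   # a distinct closure capturing x = 2

print(f1 is f2)        # False: two different dynamic objects
print(f1(10), f2(10))  # 11 12: and they behave differently
```

Identity ("is") corresponds to the "equality of dynamic occurrence" described
above; deciding whether two closures *behave* identically is, as Nick says,
undecidable in general.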

>The definition I gave for polymorphism does not state that it is only to be
>applied to functional languages.  In languages which have modifiable value
>types, all types must be modifiable if all values are to be first class 
>citizens.  But in a functional language, values are not modifiable, only
>"readable" and "replaceable."  So "INC(i)" is done by rebinding "i" to
>the value of "add(i, 1)."
>Rebinding is what functional languages do instead of value modification.  So
>rephrase my definition so that "rebinding" replaces "modification" in the 
>case of functional languages.

But, consider:

	let i = 1
	in  (let i = 2 in i end, i)
	end;

there's no assignment going on here; just a (local) rebinding of the name.
If it was assignment, the value of the above expression would be (2, 2),
not (2, 1).
Binding is a static property, and doesn't imply dynamic update-in-place
or anything like that. Functional programs have to model update by using
accumulating parameters and self-recursion.
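
Both points can be sketched in Python (again standing in for ML; lexical
shadowing plays the role of the nested "let"):

```python
# Analogue of: let i = 1 in (let i = 2 in i end, i) end
i = 1

def inner():
    i = 2        # a new, local binding that shadows the outer "i"
    return i

result = (inner(), i)
print(result)    # (2, 1): rebinding, not assignment; the outer "i" is untouched

# "Update" modeled with an accumulating parameter and self-recursion:
def sum_to(n, acc=0):
    return acc if n == 0 else sum_to(n - 1, acc + n)

print(sum_to(3))   # 6: no variable is ever modified in place
```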

>At a more primitive level,
>where a functional language program is actually implemented in machine code,
>"storage in memory" IS occuring.

At a very low-level, yes, but this information isn't of any use to the
programmer; after all, garbage collection, migration, etc. is going to
move things around at times which the user doesn't know about. It's best
to regard the runtime system as providing a heap of dynamic objects which
disappear when they aren't being referenced by anything; terms like "address"
and "memory" aren't of much use outside the runtime system.

>Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.

		Nick.
Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
               Fais que ton reve soit plus long que la nuit.

alan@oz.nm.paradyne.com (Alan Lovejoy) (04/29/89)

In article <1859@etive.ed.ac.uk> nick@lfcs.ed.ac.uk (Nick Rothwell) writes:
<In article <6000@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
<>It should be pointed out, however, that there is no inherent reason that
<>function-values cannot or should not be compared for equality.  

<It depends how you define "equality" on functions. Consider:

<		let val F = lambda x. lambda y. x+y
<		in
<		   (F 1) = (F 2)
<		end

<Then, "F 1" and "F 2" are both functions. Are they equal? They've
<evaluated to different dynamic objects, they have different closures,
<and they will give different results. However, they both derive from
<the static definition of F. The only sensible definition of equality
<is equality of dynamic occurrence; but then, two functions may behave
<identically (this is undecidable) but not be "equal".

(F 1) and (F 2) are different functions, both logically and physically.

However, F = F should evaluate to true.

<>Rebinding is what functional languages do instead of value modification.  So
<>rephrase my definition so that "rebinding" replaces "modification" in the 
<>case of functional languages.

<But, consider:
<
<	let i = 1
<	in  (let i = 2 in i end, i)
<	end;

<there's no assignment going on here; just a (local) rebinding of the name.
<If it was assignment, the value of the above expression would be (2, 2),
<not (2, 1).
<Binding is a static property, and doesn't imply dynamic update-in-place
<or anything like that. Functional programs have to model update by using
<accumulating parameters and self-recursion.

Of course.  But there is a data structure (not "i") that IS being dynamically
updated in place in your example: the dictionary that maps names to values.
The "let" command has an implied operand: the "environment," which includes
the name-value dictionary.  The "end" command restores the previous binding.
So the dictionary permits the "pushing" and "popping" of bindings. So not
only do we have a dynamically modifiable dictionary, but a stack as well.
Whether this updating actually occurs at run time or compile time is a feature
of the implementation, not a property of the language definition.
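
Alan's picture can be sketched as a mutable dictionary of binding *stacks*,
pushed on "let" and popped on "end" (hypothetical names; this models one
possible implementation, which is exactly the point under dispute):

```python
# The environment as a mutable dictionary mapping names to stacks of values.
env = {}

def push_binding(name, value):    # what "let name = value" would do
    env.setdefault(name, []).append(value)

def pop_binding(name):            # what "end" would do
    env[name].pop()

def lookup(name):                 # the current (innermost) binding
    return env[name][-1]

push_binding("i", 1)     # let i = 1
push_binding("i", 2)     #   let i = 2
inner = lookup("i")      #   ... i ...  -> 2
pop_binding("i")         #   end
outer = lookup("i")      # ... i ...    -> 1
print(inner, outer)      # 2 1
```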

<Nick Rothwell,	Laboratory for Foundations of Computer Science, Edinburgh.
<		nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick

db@lfcs.ed.ac.uk (Dave Berry) (05/02/89)

><In article <6000@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
><>It should be pointed out, however, that there is no inherent reason that
><>function-values cannot or should not be compared for equality.  

That depends on what you mean by equality.  If you are happy for the
expression "fn x => 1  =  fn x => 1" to return false, then you can define
equality by comparing the address of the function in the computer.  Logically,
however, this isn't the definition of equality that one usually wants.

In article <6032@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>In article <1859@etive.ed.ac.uk< nick@lfcs.ed.ac.uk (Nick Rothwell) writes:
>
><	let i = 1
><	in  (let i = 2 in i end, i)
><	end;
>
><there's no assignment going on here; just a (local) rebinding of the name.
><If it was assignment, the value of the above expression would be (2, 2),
><not (2, 1).
>
>Of course.  But there is a data structure (not "i") that IS being dynamically
>updated in place in your example: the dictionary that maps names to values.

That depends on the implementation, doesn't it?  It's quite possible to
implement a symbol table functionally, right down to special hardware if
you want to take it that far.
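
For instance, a purely functional environment can be built so that extension
returns a fresh mapping and nothing is ever updated in place (a sketch in
Python; the names are mine):

```python
# A purely functional environment: "extend" returns a new mapping,
# leaving the original untouched -- no update-in-place anywhere.
def extend(env, name, value):
    new_env = dict(env)      # copy; the argument env is never mutated
    new_env[name] = value
    return new_env

outer = extend({}, "i", 1)       # let i = 1
inner = extend(outer, "i", 2)    #   let i = 2 (shadowed, in a *new* environment)

print(inner["i"], outer["i"])    # 2 1: the outer binding survives intact
```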

>Whether this updating actually occurs at run time or compile time is a feature
>of the implementation, not a property of the language definition.

It's nothing to do with run-time.  The symbol table is in the compiler.
Furthermore, your argument that something occurring in an implementation
justifies a statement about the language being implemented seems completely
bogus to me.  Statements about a language should be judged by the semantics
of that language, not by the way some people might choose to implement it
on some hardware.

 "See, this is why I got off drugs.  Who can tell the difference anymore?"
Dave Berry,	Laboratory for Foundations of Computer Science, Edinburgh.
		db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
		<Atlantic Ocean>!mcvax!ukc!lfcs!db

alan@oz.nm.paradyne.com (Alan Lovejoy) (05/03/89)

In article <1899@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
<><In article <6000@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy) writes:
<>[I believe Nick Rothwell writes:]
<><there's no assignment going on here; just a (local) rebinding of the name.
<><If it was assignment, the value of the above expression would be (2, 2),
<><not (2, 1).

<>Of course.  But there is a data structure (not "i") that IS being dynamically
<>updated in place in your example: the dictionary that maps names to values.

<That depends on the implementation, doesn't it?  It's quite possible to
<implement a symbol table functionally, right down to special hardware if
<you want to take it that far.

<>Whether this updating actually occurs at run time or compile time is a feature
<>of the implementation, not a property of the language definition.

<It's nothing to do with run-time.  The symbol table is in the compiler.
<Furthermore, your argument that something occurring in an implementation
<justifies a statement about the language being implemented seems completely
<bogus to me.  Statements about a language should be judged by the semantics
<of that language, not by the way some people might choose to implement it
<on some hardware.

Unless there has been a humongous breakthrough recently that I have not heard
about, all computers are mathematically equivalent to Turing Machines.  I assert
that functional languages are Turing Machine equivalent.  Turing machines
maintain state.  Computation in a Turing machine proceeds by alpha transforms,
which change the state of the machine. 

The fact that two "different" implementations of a function are possible
says something about the relationship between those two implementations.
The fact that electrons can be implemented as either waves or particles
says something about the meaningfulness of the distinction between waves
and particles, at least at quantum energy scales.  The implication is that
at some deeper level of abstraction, the distinction is meaningless.

The semantics of a language IS ULTIMATELY DEFINED BY the output of the 
canonical interpreter of the language.  Defining virtual machines on
paper makes it easy to forget all the work being done in the brain of
the person "interpreting" the program as specified by the paper definition.
You cannot rightfully divorce "interpreter activity" from "program activity."
What one language does in the interpreter another does in user defined code.
It is possible to write a program that includes its own interpreter.

The prosecution rests its case.



Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

' Sabatella) (05/03/89)

> [ stuff about binding symbols to definitions ]
>
>>Whether this updating actually occurs at run time or compile time is a feature
>>of the implementation, not a property of the language definition.
>
>It's nothing to do with run-time.  The symbol table is in the compiler.

Obviously neither of you have seen Forth :-)

db@lfcs.ed.ac.uk (Dave Berry) (05/06/89)

In article <6063@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>
>I assert that functional languages are Turing Machine equivalent.
>Turing machines maintain state.
>
>The fact that two "different" implementations of a function are possible
>says something about the relationship between those two implementations.
>The implication is that at some deeper level of abstraction, the distinction
>is meaningless.

Are you saying that a functional language can be viewed as updating something
somewhere because this is one possible model for how it could be implemented
and the user need not know exactly how it is implemented in any particular
instance?

>The semantics of a language IS ULTIMATELY DEFINED BY the output of the 
>canonical interpreter of the language.  Defining virtual machines on
>paper makes it easy to forget all the work being done in the brain of
>the person "interpreting" the program as specified by the paper definition.

Not all semantic formalisms use the concept of a virtual machine.  Are you
using an argument similar to the one above, that because they probably could
be interpreted mechanically one may view them as defining a virtual machine?

>You cannot rightfully divorce "interpreter activity" from "program activity."

This is what you are trying to show (I think).  How does it follow from the
above?

>What one language does in the interpreter another does in user defined code.

So?  I don't understand the relevance of this.

>It is possible to write a program that includes its own interpreter.

But does this define the semantics of the language?  For example, I've seen
Lisp interpreters written in Lisp that don't specify the order of evaluation.
This knowledge has to be added from outside.

Again, I don't see the relevance of this.


It seems to me that I could use your argument to show that the archetypal
toy imperative language used in semantics textbooks doesn't have the idea
of state, because I can implement it in the lambda calculus.

		"You never smile, you know it wouldn't look right.	
 		 'Cause your dentures glow in ultra-violet light..." 
Dave Berry, Laboratory for Foundations      db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
    of Computer Science, Edinburgh Uni.	    <Atlantic Ocean>!mcvax!ukc!lfcs!db

db@lfcs.ed.ac.uk (Dave Berry) (05/06/89)

In article <6057@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>>My definition of a polymorphic function is one that can be applied
>>to objects of more than one type.
>
>This is the result of type polymorphism when applied to functions.
>An array which can accept values of any type as elements is the result
>of type polymorphism when applied to data structures.

True.  I usually think of data constructors as functions too.  This comes
of working on the ML project.

>An abstraction mechanism which can be used to construct array types of any
>index and/or element type is the result of type polymorphism when applied to
>metaclasses (a metaclass is an abstraction mechanism which constructs types).

That's an interesting view.  The approach taken round here is to use the
idea of dependent function types from type theory (as in Martin-Lo"f
type theory, Calculus of Constructions, etc.)

It still seems to me that the question of treating everything as a value
is orthogonal to the question of what is polymorphism.  (Obviously
the combination makes languages more expressive.)

>My definition of polymorphism had TWO essential components: 
>
>1) Value universality
>2) Protocol universality
>
>Overloading is a mechanism for accomplishing protocol universality.

Is it the only such mechanism that you were thinking of, or were you
also thinking of the mechanism that I call polymorphism?

>Overloading does not make a function polymorphic.  It may enable the
>definition of polymorphic abstractions and/or abstraction mechanisms, however.
>A function which, among other things, invokes the "Print" function on its
>argument will be more polymorphic if "Print" has been overloaded for many
>value types.

Providing that the type system can specify the type of the function
(if you're using a type system).

		"You never smile, you know it wouldn't look right.	
 		 'Cause your dentures glow in ultra-violet light..." 
Dave Berry, Laboratory for Foundations      db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
    of Computer Science, Edinburgh Uni.	    <Atlantic Ocean>!mcvax!ukc!lfcs!db

alan@oz.nm.paradyne.com (Alan Lovejoy) (05/10/89)

In article <1943@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>In article <6057@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>>An abstraction mechanism which can be used to construct array types of any
>>index and/or element type is the result of type polymorphism when applied to
>>metaclasses (a metaclass is an abstraction mechanism which constructs types).
>
>That's an interesting view.  The approach taken round here is to use the
>idea of dependent function types from type theory (as in Martin-Lo"f
>type theory, Calculus of Constructions, etc.)
>
>It still seems to me that the question of treating everything as a value
>is orthogonal to the question of what is polymorphism.  (Obviously
>the combination makes languages more expressive.)

See below.

>>My definition of polymorphism had TWO essential components: 
>>
>>1) Value universality
>>2) Protocol universality
>>
>>Overloading is a mechanism for accomplishing protocol universality.
>
>Is it the only such mechanism that you were thinking of, or were you
>also thinking of the mechanism that I call polymorphism?

I thought polymorphism was a property, not a mechanism.  Could you  explain
how your conceptualization of polymorphism qualifies as a mechanism?

If I say that language X permits functions to accept and produce values
of any type, would you say that that language is highly polymorphic?

What if language X only has two value types:  integer and real?

For theoretical reasons, it is productive to view all actions that occur in the
compilation and/or execution of a program as the result of the evaluation of 
functions. Hence, the definition of a new type should be viewed as a service
provided by a compile-time function ("compile-time" in most languages, anyway).
The range of definable data types in a language is obviously a function of the
polymorphism of the type-creation function.  Similarly, the production of values
of a type should be viewed as a service provided by an instantiation function.
Obviously, the polymorphism of this instantiating function determines what
can be a value in a language.  This is how my definition of polymorphism
can be derived from the definition which you are used to.

Values are instances of a type (objects are instances of a class).  The 
requirement for protocol universality is related to the requirement for
value universality by this fact, because the existence of the ability,
and the protocol, to instantiate integers motivates the existence of the
ability to instantiate other types USING THE SAME INSTANTIATION PROTOCOL.

By the same reasoning, the ability to define the integer type motivates the 
ability to define other types USING THE SAME TYPE DEFINITION PROTOCOL.  In
other words, if types are the result of functions which evaluate to types,
then polymorphism demands that the type definition function be able to
produce multiple types.  Given multiple types, polymorphism demands that
the value instantiation function accept many types as an input parameter.

From this perspective, functions, even notional ones, that are available to
the compiler but not to the programmer, impair polymorphism.  Denial of the
services of a function is the same thing as prohibiting the use of a type
(or its instances).  FUNCTIONS ARE VALUES, TOO.  Second class functions
are second class values.  Ask not only what types are in the domain of 
your functions, but also what functions are defined on your types.

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

alan@oz.nm.paradyne.com (Alan Lovejoy) (05/11/89)

In article <1941@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>In article <6063@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:

>>I assert that functional languages are Turing Machine equivalent.
>>Turing machines maintain state.

>>The fact that two "different" implementations of a function are possible
>>says something about the relationship between those two implementations.
>>The implication is that at some deeper level of abstraction, the distinction
>>is meaningless.

>Are you saying that a functional language can be viewed as updating something
>somewhere because this is one possible model for how it could be implemented
>and the user need not know exactly how it is implemented in any particular
>instance?

Not quite.  Turing machines are the ONLY possible model I know of for how a 
functional language could be implemented--other than in a biobrain.  Are
neurons Turing machines?

This (functional languages must be implemented using Turing machines) is one of
those things that cannot be proven--only disproven.  Science is like that, 
though.

I'm awaiting the disproof...

>>The semantics of a language IS ULTIMATELY DEFINED BY the output of the 
>>canonical interpreter of the language.  Defining virtual machines on
>>paper makes it easy to forget all the work being done in the brain of
>>the person "interpreting" the program as specified by the paper definition.

>Not all semantic formalisms use the concept of a virtual machine.  Are you
>using an argument similar to the one above, that because they probably could
>be interpreted mechanically one may view them as defining a virtual machine?

"Virtual machine" was not the best term.  "Axiomatic system" would have been
better.  I am making an analogy between axioms and virtual machine primitives.
This is a stronger statement than simply noticing that axiomatic systems
might be implemented via [virtual] machines.

The difference between an axiom and a machine operation is that axioms are
STATEMENTS of what is BELIEVED to be true, but machine operations are COMMANDS 
that MAKE something true.  Axioms DEFINE truth, machine operations ESTABLISH it.
Axioms are used to construct proofs of theorems.  Machine operations are used
to implement functions.

A function to sum the value of the numbers in a list is analogous to the
theorem that the result of evaluating the function is the sum of the members
of the list.  The machine operations of which the function is composed
are analogous to the proof.
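
A tiny concretization of the analogy (my example, not Alan's):

```python
# The "theorem" is the property asserted about the function; each step of
# the loop plays the role of a machine operation in the "proof".
def sum_list(xs):
    total = 0
    for x in xs:
        total = total + x    # one machine operation / one proof step
    return total

print(sum_list([1, 2, 3, 4]))   # 10: an instance of the "theorem" holds
```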

>>You cannot rightfully divorce "interpreter activity" from "program activity."

>This is what you are trying to show (I think).  How does it follow from the
>above?

It doesn't.  It follows from the below:

>>What one language does in the interpreter another does in user defined code.

>So?  I don't understand the relevance of this.

The point is that there is nothing inherent in the SEMANTICS of a function
that dictates whether its evaluation is necessarily part of the language
interpretation process or part of the program execution process.  Not only
that, there is nothing inherent in the semantics of a function that dictates
whether or not the function should be built-in to the language or left to
the programmer as an exercise.  In other words, different languages can
choose completely disjoint sets of "built-in" or "primitive" functions.
A language that left EVERYTHING to the programmer as an exercise would be
the null language which ("permits" | "requires") the programmer to write
his own interpreter/compiler and define his own syntax.

Different implementations of different languages make different decisions 
about what functions exist, and at what time they are evaluated.  So these
are arbitrary decisions that say nothing fundamental about programming 
languages in general.  The question is not, "When (i.e., compile-time, run-time,
etc.) is the function evaluated?" but "Is the function evaluated at all?"

In Western culture, we conceptualize certain physical dimensions (mass, length,
time, electric current, angle...) as *primitive* dimensions.  All other 
physical dimensions (velocity, energy, electric charge, action, momentum,
power, etc.) we conceptualize as derived dimensions which we define in terms
of the primitive dimensions.  Thus velocity is defined as length divided by
time.  This may come as a surprise to some people, but the decision as to
which dimensions are primitive and which ones are derived is completely 
arbitrary.  

If we accept the traditional set of primitive dimensions, then the energy 
dimension is mass X length^2 / time^2.  Choosing another set of primitive 
dimensions, say mass (electron's rest mass), velocity (speed of light), charge 
(electron's charge), action (Planck's constant), entropy (Boltzmann's constant) 
and angle (1 radian) the formula for energy becomes mass X velocity^2.  Or as
Einstein wrote it:  E=mc^2 (where "c" is the velocity of light).  The length 
dimension would then be action/(mass X velocity), and time would be 
action/(mass X velocity^2) (or action/energy).
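
The change of basis can be checked mechanically by treating dimensions as
exponent vectors (a sketch in Python; the representation is mine, not Alan's):

```python
# Dimensions as exponent vectors over the base symbols M (mass),
# L (length), T (time).
M, L, T = {"M": 1}, {"L": 1}, {"T": 1}

def mul(a, b):
    """Multiply two dimensions by adding exponents."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def power(a, n):
    """Raise a dimension to an integer power."""
    return {k: v * n for k, v in a.items()}

energy   = mul(M, mul(power(L, 2), power(T, -2)))   # M L^2 / T^2
velocity = mul(L, power(T, -1))                     # L / T
mc2      = mul(M, power(velocity, 2))               # m c^2

print(energy == mc2)   # True: the same dimension, expressed in two bases
```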

There are two criteria for preferring one set of primitive dimensions over
another: the average complexity of the formulae for the derived dimensions
that result from choosing some set of primitive dimensions, and whether 
a dimension has some easily-measured physical constant (such as the speed of
light in the case of velocity).  But these are practical considerations,
not "scientific" ones.

What's the point? I assert that the choice of "primitives" to be provided
by a programming language is like the choice of primitive dimensions:
completely arbitrary. You can change the expression of the definition of
a dimension and/or a data type by choosing a different set of primitives,
but you haven't changed what it actually *IS*.

>It seems to me that I could use your argument to show that the archetypal
>toy imperative language used in semantics textbooks doesn't have the idea
>of state, because I can implement it in the lambda calculus.

The true nature of a thing is best comprehended as the UNION of its possible
definitions, not the INTERSECTION.

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.

gudeman@arizona.edu (David Gudeman) (05/11/89)

In article  <6090@pdn.paradyne.com> alan@oz.nm.paradyne.com (Alan Lovejoy) writes:
>...  Turing machines are the ONLY possible model I know of for how a
>functional language could be implemented--other than in a biobrain...

I don't understand this comment.  Functional languages can be (and
are) implemented on dozens of different architectures with no
resemblance to Turing machines.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@arizona.edu
Tucson, AZ 85721                 {allegra,cmcl2,ihnp4,noao}!arizona!gudeman

db@lfcs.ed.ac.uk (Dave Berry) (05/19/89)

In article <6090@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>In article <1941@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>You cannot rightfully divorce "interpreter activity" from "program activity."
>The point is that there is nothing inherent in the SEMANTICS of a function
>that dictates whether its evaluation is necessarily part of the language
>interpretation process or part of the program execution process.

I don't see how this is relevant to the discussion of the semantics of
a language, as opposed to a function written in that language.  Yes, if
you use denotational semantics as your formal model you can describe
each construct of the language by a function, but that function is then
a function of the underlying formal system, not of the language being
described.
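The point can be made concrete with a sketch (my example, not from the
posting): a denotational-style description maps each construct of a toy
imperative language to a state-transforming function, but those functions
belong to the defining metalanguage, not to the language being described.

```python
# Denotations of two constructs of a toy imperative language,
# as functions from state (a dict of variable bindings) to state.
def assign(var, expr):
    """[[var := expr]] : State -> State"""
    return lambda s: {**s, var: expr(s)}

def seq(c1, c2):
    """[[c1 ; c2]] : State -> State"""
    return lambda s: c2(c1(s))

# The program  x := 1 ; y := x + 1  denotes one composite function:
prog = seq(assign("x", lambda s: 1),
           assign("y", lambda s: s["x"] + 1))

print(prog({}))  # {'x': 1, 'y': 2}
```

`assign` and `seq` are functions of the metalanguage; the toy language
itself still has no functions at all, only assignment and sequencing.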

>In other words, different languages can
>choose completely disjoint sets of "built-in" or "primitive" functions.
>A language that left EVERYTHING to the programmer as an exercise would be
>the null language which ("permits" | "requires") the programmer to write
>his own interpreter/compiler and define his own syntax.

Which would be implementing a different language.  The null language would
not provide assignment or recursion or any other construct.  In fact you
couldn't write an interpreter or compiler using it at all.

>Different implementations of different languages make different decisions 
>about what functions exist, and at what time they are evaluated.

For example, some have assignment, and some don't.

>I assert that the choice of "primitives" to be provided
>by a programming language is like the choice of primitive dimensions:
>completely arbitrary. You can change the expression of the definition of
>a dimension and/or a data type by choosing a different set of primitives,
>but you haven't changed what it actually *IS*.

Again, this seems to be arguing that you can implement a function in
several different languages.  This doesn't show that the languages are the
same, or that they should be considered to be the same (except in the
rather boring aspect of equal expressive power).

The difference between physics primitives and programming language primitives
is that programming languages are hierarchic; if you implement a language L'
in terms of an existing language L you are not extending L, you are using
it as an implementation tool.
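As a sketch of that distinction (my example, not from the thread): a small
interpreter for an imperative language L', written in a purely functional
style in the host language L.  The host gains no assignment construct; L'
state exists only as a value the interpreter threads through.

```python
# Interpreter for a tiny imperative L': a program is a list of
# ('set', var, expr) instructions.  The store is never mutated in
# place; each step rebuilds it, keeping the host code functional.
def run(prog, store=None):
    store = store or {}
    for op, var, expr in prog:
        assert op == "set"               # L' assignment
        store = {**store, var: expr(store)}
    return store

result = run([("set", "x", lambda s: 10),
              ("set", "x", lambda s: s["x"] + 1)])
print(result)  # {'x': 11}
```

L' has assignment and L does not; implementing one in the other leaves
both languages exactly as they were.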

>>It seems to me that I could use your argument to show that the archetypal
>>toy imperative language used in semantics textbooks doesn't have the idea
>>of state, because I can implement it in the lambda calculus.
>
>The true nature of a thing is best comprehended as the UNION of its possible
>definitions, not the INTERSECTION.

This seems more like an argument for my side, in that a comparison of two
languages should include the differences between them as well as a statement
of their expressive power (i.e. that they can all be modelled by a universal
Turing machine).

>This (functional languages must be implemented using Turing machines) is one
>of those things that cannot be proven--only disproven.

Any model capable of implementing functional languages must be as powerful
as a Turing machine, but it need not itself be a Turing machine.

Dave Berry, Laboratory for Foundations      db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
    of Computer Science, Edinburgh Uni.	    <Atlantic Ocean>!mcvax!ukc!lfcs!db

 "It's a cute castle, but why did they build it right next to the railway?"