alan@rnms1.paradyne.com (Alan Lovejoy) (04/21/89)
vax.bbn.com (Mike Thome) writes:
rnms1.paradyne.com (Alan Lovejoy) writes:
>>-- Portability --
>>1) Language portability is a function of how similar the compilers for
>>different systems are and the breadth and depth of STANDARD functions
>>that are provided via STANDARD interfaces. This is largely an
>>extra-linguistic issue that has more to do with the history and culture
>>of different language communities than it has to do with the technical
>>merits of languages as languages. Smalltalk is the winner here, with C++
>>a strong runner-up.

>I'm not sure I see how this is *at all* a linguistic issue - it sounds
>as though you are talking more of programming environment and fidelity
>of implementations than anything else. Frankly, I'm surprised that C++
>is listed in the same sentence as Smalltalk in this regard - Smalltalk is
>an entire integrated environment implemented in itself... the only other
>similar systems that come close are Lisp Machines (from various vendors)
>running Flavors - perhaps some of the Pascal P-machines might be grouped
>in the same way.

1) How is portability a linguistic issue?

Languages that lack certain amenities that users find irresistible tend to
sprout nonstandard extensions in each implementation. FORTRAN and Pascal
are famous examples. Then there are those languages whose programmer
interface is too "physical" or perhaps "low level" (which is not
necessarily the same thing!), such that it assumes too much about the
architecture of a particular system. Assembly languages are good examples,
although even "high-level" languages may exhibit this problem. It is often
masked by the architectural similarities of modern general purpose systems,
however (e.g., byte, halfword, fullword bit sizes; binary encoding; data
type-machine instruction binding).

2) Why does C++ deserve mention alongside ST-80 in regards to portability?

C++ inherits :-) a large supply of STANDARD functions and STANDARD
interfaces from C.
To a large extent, C's supply of standard functions arises out of its
association with the UNIX operating system. While comparing UNIX and the
ST-80 image is not completely kosher, it cannot be denied that both UNIX
and the ST-80 image represent a LOT of code and a LOT of functionality --
and that none of the other languages mentioned have anything like it to
within an order of magnitude.

>>-- Object Orientation --
>>The definition of "object oriented language" is open to debate. Instead
>>of debating the minimum set of attributes that an object-oriented
>>language should have, it is more instructive to consider the ideal case.
>>First, some definitions: A class is a set of values and the
>>functions/relations defined on the members of that set. An object is an
>>instance of a class which represents one of the values in the set, that
>>is, one of the values of the class. Inheritance is the ability to define
>>a new class in terms of its differences (extensions, deletions, changes)
>>from some other pre-existing class.

>CLOS generalizes that to "... from some other pre-existing (required for
>instance instantiation) class or classes".

>>The ideal object-oriented language would:
>>1) Have user-definable classes.

>CLOS satisfies this requirement - methods may be specified not only on
>defined classes, but also on built-in data types (i.e. integers vs. floats
>vs. complex vs. ratios ... etc.) and even single values.

>>2) Have class inheritance.

>CLOS has this. In addition, it allows inheritance from multiple
>(possibly disjoint) classes.

>>3) Allow function/operator overloading.

>CLOS has this. Any function may be specialized on any one or more of
>its required arguments. (There is no difference between operator and
>function in Lisp. Note that macros and special forms may not be
>specialized.)

>>4) Be completely polymorphic.
>>Smalltalk and Lisp are highly polymorphic. Next is Objective-C and
>>Prolog. Then C++, and last is Modula-2.
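The definitions above (user-definable classes, inheritance as definition-by-difference, operator overloading) can be sketched in any language that has all three; here is a rough rendering in Python, with class and method names invented purely for illustration:

```python
# A class bundles a set of values with the functions defined on them.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    # Operator overloading: "+" gains a meaning for Points.
    def __add__(self, other):
        return type(self)(self.x + other.x, self.y + other.y)

# Inheritance: a new class defined by its differences from Point --
# here, one added attribute, with everything else carried over.
class Pixel(Point):
    def __init__(self, x, y, color="black"):
        super().__init__(x, y)
        self.color = color

p = Point(1, 2) + Point(3, 4)   # overloaded "+" on the base class
q = Pixel(1, 2) + Pixel(3, 4)   # the inherited "+" yields a Pixel
```

Note that because `__add__` constructs `type(self)(...)` rather than `Point(...)`, the inherited operator automatically produces instances of the subclass.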
>I would like to see the definition of "polymorphic" (for my own
>education).

You (and several other people, in email) asked for it! So here it is,
along with three references:

1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
   David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
   Halsted Press: a division of John Wiley & Sons
   -- MUST READING --

2. "Denotational Semantics -- A Methodology For Language Development" -- 1986
   David A. Schmidt, Kansas State University
   Allyn and Bacon, Inc.

3. "Abstraction Mechanisms And Language Design" -- 1981
   an ACM Distinguished Dissertation (1982) by Dr. Paul N. Hilfinger, CMU
   The MIT Press

-- Definition Of Polymorphism --

Full polymorphism means that any component or aspect of a program, program
component or programming language must behave as a value. Any and all
values must be subject to inspection, modification, storage in memory,
usage as a component part of some composite value, usage as a parameter,
usage as an operand in an expression, being computed as the result of
evaluating an expression or function, and (re)definition of its form,
structure, content or name. The protocol for invoking these common
operations must be the same for all values, regardless of type.

To say the same thing more concretely: numbers, characters, data types,
procedures, variables, addresses, blocks, execution contexts, name
bindings, scopes, processes and the number of bits in an integer must all
be "first class objects." First class objects all share the same set of
fundamental rights, privileges and abilities, the protocol for the exercise
of which is the same for all objects. A first class object has a value that
can be inspected, changed or stored in memory. There are functions and/or
relations that have been and/or can be defined for any first class object.
A first class object can have its structure defined or redefined. It can
be bound to a name, and its name can be changed.
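A partial illustration of the "everything is a value" ideal: in Python (standing in here for the fully polymorphic language described above, which Python only approximates), numbers, functions and types are all values that can be stored, inspected, passed as parameters and rebound through one uniform protocol. The names below are invented for the example.

```python
def double(x):
    return x + x

# Store heterogeneous values -- an integer, a function, a type --
# in one composite value, using the same protocol for each.
values = [42, double, int]

# Inspect any value through a uniform protocol.
def describe(v):
    return type(v).__name__

names = [describe(v) for v in values]

# Rebind a name: the function is a value like any other.
double = lambda x: 2 * x
```

Even here the ideal is not fully met: Python will not let you redefine the number of bits in an integer or treat a scope as a first class object, which is exactly the gap the definition above is pointing at.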
-- Definition of Abstraction --

Abstraction is the process of discovering and/or expressing what is
INVARIANT, constant and organic in a system by taking out, leaving out or
not mentioning that which is changing, temporary or accidental.

-- Programming Language Abstraction Mechanisms --

User-definable procedures, functions and data types are the most important
(and most common) abstraction mechanisms found in modern programming
languages. Generally, the abstraction mechanisms in a programming language
permit one to construct GENERAL PURPOSE structures, devices or tools that
are DEFINED ONCE BUT USED MANY TIMES in different contexts. Let us refer
to such entities in a program as "object abstractions".

Object abstractions serve as TEMPLATES from which multiple but partially
identical program objects can be created. Each such "created object" in a
program (or set of programs), as a separate INSTANCE of the same object
abstraction, will have all the same properties, attributes, functions and
structures defined for that object abstraction. The idea is to build an
abstracted MASTER OBJECT, an ORIGINAL MOLD from which multiple actual
instances can be generated like cookies from a cookie cutter. Ideally, the
programmer will design his object abstractions in such a way that they can
even be reused in totally unrelated programs or systems.

By using instantiation of object abstractions to generate program objects,
the workings, interfaces and functionality of the program objects are tied
to the object abstraction, so that modifications or fixes need only be done
to the object abstraction; the instantiation process ensures that
modifications are automatically propagated to the generated program
objects. Instantiation of object abstractions allows multiple copies of a
program object to be created with little extra effort beyond what is
required to make just the first copy, saving time.

Another type of abstraction mechanism called PARAMETERIZATION is also of
critical importance.
Parameterization is a mechanism which permits the programmer to define
program objects whose full nature, whose complete composition, structure,
operation and/or data is NOT DETERMINED UNTIL THE OBJECT IS ACTUALLY
INSTANTIATED, OR EVEN UNTIL THE PROGRAM IS EXECUTED!

For example, a square is a shape abstraction whose usefulness derives
partly from the fact that it says nothing about the SIZE of square shaped
objects. The concept or abstraction "square" is INDEPENDENT of size (and
of horizontal/vertical orientation, location, material composition, weight,
color, velocity...). All actual squares must of course have some actual
size, but the ability to form such abstractions DIVORCED of certain details
is incredibly useful. It is far better to write a square drawing procedure
that will draw a square of any size than to have to write a different
square drawing procedure for each desired size of square. Such a
size-independent procedure is said to be parameterized: it accepts the size
of the square to be drawn as an input "parameter".

-- Polymorphism and Abstraction Mechanisms --

Polymorphism involves two invariants: everything behaves as a value, and
all values have a common set of abilities and a common protocol for the
exercise of those abilities. Abstraction depends upon discovering and
exploiting invariants. If everything is a value, then everything can be
used by abstraction mechanisms. If the protocol for exercising the
fundamental abilities of all values is the same, then the same abstraction
mechanisms can be applied to values of any type. What abstractions can be
created by abstraction mechanisms depends upon what can be a value. The
generality of the abstractions depends upon both what can be a value and
the commonness of the protocol for accessing common attributes and/or
behaviors of values.
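The size-independent square-drawing procedure described above might look like this (a sketch in Python; the function and parameter names are invented for the example):

```python
# A parameterized "draw square" procedure: the size is left undetermined
# until the call, so one definition serves squares of every size.
def draw_square(size, brush="*"):
    """Return a text rendering of a square with the given side length."""
    return "\n".join(brush * size for _ in range(size))

# One abstraction, many instances:
small = draw_square(2)
large = draw_square(5)
```

The `brush` parameter shows the same idea applied to another "accidental" detail: the procedure is also divorced from the character used to draw.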
For example, the usefulness of the list abstraction depends upon what can
be a member of the list (what can be a value), and upon the orthogonality
of the protocol for manipulating values: each different protocol for
loading/storing values from/to memory (e.g., "list.element := element"
versus "list.element := element^") requires a different version of the
abstraction.

Consider the following function definition:

  function sum(a);
  begin
    s := 0;
    for i := 1 to size(a) do
      s := s + a(i);
    end;
    return s;
  end sum;

The usefulness of this function depends upon what values can be input to it
as parameters, upon what value types functions can return, upon what value
types have the ":=", "+" and "()" operations defined on them, and upon what
value types have the "size()" function defined for them. In a language
where the function invocation and array indexing operators are both "()",
the parameter to this function might be either an array or a function. In
Pascal or Modula-2, the parameter would have to be a function only, because
the array indexing operator is "[]".

The point is that polymorphism enhances the power of abstraction
mechanisms. Lack of polymorphism cripples abstraction.
--
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne. They do not speak for me.
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.
nick@lfcs.ed.ac.uk (Nick Rothwell) (04/21/89)
I've cut down the Followup distribution of this, to stop it going out to
the entire universe; I think that general discussions of language issues
belong in comp.lang.misc...?

In article <5957@pdn.paradyne.com> alan@rnms1.paradyne.com (Alan Lovejoy)
writes:
>1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
>   David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
>   Halsted Press: a division of John Wiley & Sons
>   -- MUST READING --

The mathematical basis for polymorphic type systems comes from Robinson's
article on unification in the JACM, from around '67 (sorry, can't remember
the title). From here, there's Milner's work, the LCF system, and of course
(fanfare:) ML.

>-- Definition Of Polymorphism --
>
>Full polymorphism means that any component or aspect of a program, program
>component or programming language must behave as a value. Any and all
>values must be subject to inspection, modification, storage in memory,
>usage as a component part of some composite value, usage as a parameter,
>usage as an operand in an expression, being computed as the result of
>evaluating an expression or function, and (re)definition of its form,
>structure, content or name. The protocol for invoking these common
>operations must be the same for all values, regardless of type.

People might disagree about the extent of "full" polymorphism. ML is
certainly polymorphic, although you can't "inspect" all values (they may be
abstract, or function objects). "Modification" is only allowed on objects
created specially for the purpose: how do you modify a function? "Storage
in memory" - doesn't make sense for higher-order garbage collected
languages. "(Re)definition of its form" I don't understand...

>To say the same thing more concretely: numbers, characters, data types,
>procedures, variables, addresses, blocks, execution contexts, name
>bindings, scopes, processes and the number of bits in an integer must all
>be "first class objects."
...although a lot of these things (addresses? blocks?) don't exist in some
languages. And I don't see that you want to allow the user to alter
absolutely everything ("hey, let's change the number of bits in a word from
being 32 to being yellow...").

I presume that this is David Harland's definition of polymorphism...? His
approach is to provide this rich model, where everything goes, with
sophisticated hardware support. But, this is a specific approach, and
shouldn't be taken to imply that polymorphism encapsulates everything in
this approach.

>-- Definition of Abstraction --
>
>Abstraction is the process of discovering and/or expressing what is
>INVARIANT, constant and organic in a system by taking out, leaving out or
>not mentioning that which is changing, temporary or accidental.

Isn't that a rather strange way of putting it? I view abstraction as a
process of distinguishing a system's interface from its implementation.

>The point is that polymorphism enhances the power of abstraction
>mechanisms. Lack of polymorphism cripples abstraction.

Absolutely. In conventional languages, you have to be concerned with the
form data structures take (is it a record? an address of a record? do I
have to garbage collect it? etc. etc.). Polymorphism does away with this,
but the price you pay is that polymorphism requires support (garbage
collected store, special hardware, etc. etc.). A price I'm quite willing
to pay, actually... Now, where's that ML session...?

>Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.

Nick.
--
Nick Rothwell, Laboratory for Foundations of Computer Science, Edinburgh.
nick@lfcs.ed.ac.uk    <Atlantic Ocean>!mcvax!ukc!lfcs!nick
~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~
...while the builders of the cages sleep with bullets, bars and stone,
they do not see your road to freedom that you build with flesh and bone.
jack@cs.glasgow.ac.uk (Jack Campin) (04/22/89)
alan@rnms1.paradyne.com (Alan Lovejoy) wrote:

>>> [the ideal programming language should] Be completely polymorphic.

>>I would like to see the definition of "polymorphic" (for my own
>>education).

> You (and several other people, in email) asked for it! So here it is,
> along with three references:
> 1. "Polymorphic Programming Languages -- Design and Implementation" -- 1984
>    David M. Harland, B.Sc., M.Sc., Ph.D., University Of Glasgow
>    Halsted Press: a division of John Wiley & Sons

> Full polymorphism means that any component or aspect of a program, program
> component or programming language must behave as a value. Any and all
> values must be subject to inspection, modification, storage in memory,
> usage as a component part of some composite value, usage as a parameter,
> usage as an operand in an expression, being computed as the result of
> evaluating an expression or function, and (re)definition of its form,
> structure, content or name. The protocol for invoking these common
> operations must be the same for all values, regardless of type.

> To say the same thing more concretely: numbers, characters, data types,
> procedures, variables, addresses, blocks, execution contexts, name
> bindings, scopes, processes and the number of bits in an integer must all
> be "first class objects."

You and Harland may mean that by "polymorphic", but the rest of the world
certainly doesn't. The mathematically understood senses of polymorphism
are (1) the Hindley type system, as implemented in Milner's ML and most
functional languages since, and (2) Girard's more powerful type system F,
reinvented by Reynolds and not fully implemented in any language I can
think of (Fairbairn's Ponder comes closest; Mitchell and Plotkin's SOL and
Barendregt's TALE do it on paper). There are umpteen mix-and-match
combinations of these with other ideas.
It may be possible to construct a logical system combining the above notion
of dependent type with polymorphism; but nobody has ANY real idea how to
typecheck programs written in such calculi - a few flaky attempts to
implement some of Martin-Lof's type theories are the nearest anyone's got.
Nobody has the *faintest* idea how to coherently describe type systems in
which types are assignable values, as you ask for above. Putting all three
of these requirements (Milner or Girard polymorphism, dependent types,
assignable types) together in one formal calculus is presently quite
unimaginable, still less implementing the result in a type-safe programming
language.

For another cautionary example, look at Cardelli's attempts to meet a few
of the same criteria in recent years: the result being a series of
inconsistent type systems attempting to provide a type of all types and an
algorithm for typechecking dependent types that nobody has any idea how to
prove complete (or refute, for that matter). Or look at Burstall and
Lampson's Pebble (dependent types and first-class bindings), still
unimplemented ten years or so after its design. This does not suggest we
can expect any quick answers.

Yes, you might be able to hack together a language that provided all those
motherhood features (though as far as I know Harland's team at Linn haven't
managed to implement any significant fragment of Poly in the five years
since that book came out). But you wouldn't have a hope of reasoning about
programs written in it, and the appropriate formalism for verifying the
compiler would be chicken sacrifices and moving your mouse in pentagrams
over the source code.

[ for references to this stuff, browse through a few recent LNCS's on data
types and the like ]

Despite this, the machine (REKURSIV/OBJEKT) that Harland designed to
implement these ideas seems to be a fast engine for running object-oriented
code on.
--
Jack Campin  *  Computing Science Department, Glasgow University,
17 Lilybank Gardens, Glasgow G12 8QQ, SCOTLAND.
041 339 8855 x6045 wk   041 556 1878 ho
INTERNET: jack%cs.glasgow.ac.uk@nss.cs.ucl.ac.uk   USENET: jack@glasgow.uucp
JANET: jack@uk.ac.glasgow.cs   PLINGnet: ...mcvax!ukc!cs.glasgow.ac.uk!jack
norvell@csri.toronto.edu (Theodore Stevens Norvell) (04/23/89)
In article <2841@crete.cs.glasgow.ac.uk> jack@cs.glasgow.ac.uk (Jack
Campin) makes some good points, but wrote:
>
>alan@rnms1.paradyne.com (Alan Lovejoy) wrote:
>> To say the same thing more concretely: numbers, characters, data types,
>> procedures, variables, addresses, blocks, execution contexts, name
>> bindings, scopes, processes and the number of bits in an integer must
>> all be "first class objects."
>
>You and Harland may mean that by "polymorphic", but the rest of the world
>certainly doesn't. The mathematically understood senses of polymorphism
>are (1) the Hindley type system, as implemented in Milner's ML and most
>functional languages since, and (2) Girard's more powerful type system F,
>reinvented by Reynolds and not fully implemented in any language I can
>think of [...]
>

Perhaps this is a quibble, but surely you are confusing "polymorphism" with
"polymorphic type inference". Consider the following quotation describing
polymorphic programming:

    A widely employed style of programming, particularly in
    structure-processing languages which impose no discipline of types
    (LISP is a perfect example), entails defining procedures which work
    well on objects of a wide variety (e.g., on lists of atoms, integers,
    or lists).

Surely Harland's Poly is another perfect example. In Poly can one not
define procedures which work well on objects of a very wide variety?

The quotation, by the way, is from Milner's seminal paper `A Theory of Type
Polymorphism in Programming' (J. Comp. & Sys. Sci., 17, 1978), in which he
used Hindley's system (without knowing it was Hindley's) to do strong type
checking in a Lisp-like language (ML). The introduction of this approach
did not suddenly make LISP, Prolog, or Smalltalk (or Poly, had it existed
at the time) less polymorphic. Nor did it make languages that use other
approaches to strong type checking of polymorphic programs (e.g., C++ and
Modula-3) less polymorphic.
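Milner's description quoted above - procedures that work equally well on lists of atoms, integers, or lists - can be illustrated with a small sketch (Python standing in here for LISP; the function name is invented for the example):

```python
# One definition of "length" that works on lists of anything: atoms
# (strings), integers, or lists of lists. The procedure never examines
# the element type, which is what makes it polymorphic in Milner's sense.
def length(lst):
    n = 0
    for _ in lst:
        n += 1
    return n

print(length(["a", "b", "c"]))      # a list of atoms
print(length([1, 2, 3, 4]))         # a list of integers
print(length([[1], [2, 3], []]))    # a list of lists
```

Hindley-Milner type inference would assign this the type "forall T. list of T -> int" without the programmer writing any annotation - which is precisely why inference and polymorphism are separable ideas.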
Milner also credits the term "polymorphism" to Strachey in 1967, which
predates even Hindley's work (1969). The term "polymorphic type
inference", on the other hand, describes perfectly the work of Hindley,
Milner, Girard, Reynolds, McCracken, Cardelli, Wand, etc.

Theo Norvell
U of Toronto

On the other hand, language is living. Perhaps I'm just out of date.
boehm@flora.rice.edu (Hans Boehm) (04/23/89)
I would like to take issue with a few of the points raised in the recent
discussion of polymorphism, especially by Jack Campin's article.

1. ML is traditionally viewed as the canonical example of a polymorphically
typed programming language. Nonetheless, there is some legitimate debate
whether the language (without the recently added module system) should
really be considered polymorphic. ML-style polymorphism (like Ada
generics) can always be replaced by macro expansion. (Unlike in Ada, this
is not the usual implementation strategy. Also unlike Ada, polymorphic
expressions have a type that can be written down.) This is not true of the
Girard-Reynolds system.

2. Several programming languages based on the Girard-Reynolds lambda
calculus have been successfully implemented and used. The ones I know
about are Russell (with which I have been involved), the language
underlying IBM's Scratchpad II symbolic algebra system, and David Matthews'
Poly. Some recent language designs from Waterloo probably also fit into
this category. The type systems used by these languages are all variations
on the Reynolds system. But they all must address essentially the same
issues that would have arisen if the Reynolds system had been used
directly.

3. It is no harder to define the semantics of one of these languages than
that of any other real programming language. This is an issue that is
largely independent of the type system. Indeed, the semantics are usually
specified without reference to the type annotations in the program.
Empirically, a program written in one of these languages is much more
likely to be correct (once it compiles) than a C or Lisp program. (The
same observation applies to ML, for those programs that can easily be
written in ML. Unfortunately, these statements are all based on anecdotal
evidence.) Note that giving a semantics to a program is orthogonal to
giving semantics to types, since we usually specify program semantics as
though the program were untyped.
Giving semantics to types, and proving that the type system ensures certain
execution-time properties, is harder, and is certainly still a subject of
research. But the real problem here is that we don't entirely understand
what properties should be ensured by a type checking system, independent of
the particular language. I claim that this problem is as hard to solve for
Pascal as it is for Russell or Scratchpad II.

4. Type checking (with programmer-specified types) is well understood for
these languages. Type RECONSTRUCTION (or inference) in the ML sense is
harder, but is also beginning to be understood. See the paper by Frank
Pfenning in the last Lisp conference, or one of the two papers on the
subject in the upcoming SIGPLAN conference. To date, this has probably
been the most serious implementation obstacle for these languages.

5. A number of these systems allow a type of all types, which is an
"element of" itself. This causes problems with the "propositions as types"
analogy, which is important in other contexts. It also causes some slight
complications in the implementation, since there is no longer a clear
distinction between type and non-type values. Thus types may have to be
represented at run time. (However, it usually doesn't matter what they are
represented as, only that space is reserved for them as for other values.
Even this can usually be optimized away.) I know of no real sense in which
it makes a programming language type system inconsistent. (The
"propositions as types" analogy is also hard to preserve if it is possible
to write any programs that do not terminate. I know of no real programming
languages in which all programs terminate.)

Hans-J. Boehm
boehm@rice.edu
alan@rnms1.paradyne.com (Alan Lovejoy) (04/23/89)
In article <2841@crete.cs.glasgow.ac.uk> jack@cs.glasgow.ac.uk (Jack Campin) writes:
>> I wrote:
>> Full polymorphism means that any component or aspect of a program, program
>> component or programming language must behave as a value. Any and all
>> values must be subject to inspection, modification, storage in memory,
>> usage as a component part of some composite value, usage as a parameter,
>> usage as an operand in an expression, being computed as the result of
>> evaluating an expression or function, and (re)definition of its form,
>> structure, content or name. The protocol for invoking these common
>> operations must be the same for all values, regardless of type.
>
>> To say the same thing more concretely: numbers, characters, data types,
>> procedures, variables, addresses, blocks, execution contexts, name bindings,
>> scopes, processes and the number of bits in an integer must all be "first
>> class objects."
>
>You and Harland may mean that by "polymorphic", but the rest of the world
>certainly doesn't.
I deliberately defined it as extremely as I could. I knew it was
controversial. I wanted to start a discussion. It worked, didn't it?
>The mathematically understood senses of polymorphism are
>(1) the Hindley type system, as implemented in Milner's ML and most functional
>languages since, and (2) Girard's more powerful type system F, reinvented by
>Reynolds and not fully implemented in any language I can think of (Fairbairn's
>Ponder comes closest; Mitchell and Plotkin's SOL and Barendregt's TALE do it
>on paper).

>Yes, you might be able to hack together a language that provided all those
>motherhood features (though as far as I know Harland's team at Linn haven't
>managed to implement any significant fragment of Poly in the five years since
>that book came out). But you wouldn't have a hope of reasoning about programs
>written in it, and the appropriate formalism for verifying the compiler would
>be chicken sacrifices and moving your mouse in pentagrams over the source code.
The definition I gave is motivated by denotational semantics. Whether its
mathematical properties are completely understood is a completely separate
question. All I gave was a definition. I didn't say that any language
does or could live up to it, or that it would be worthwhile to create a
language that could.
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne. They do not speak for me.
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.
db@lfcs.ed.ac.uk (Dave Berry) (05/02/89)
In article <3150@kalliope.rice.edu> boehm@flora.rice.edu (Hans Boehm)
writes:
>There is some legitimate debate whether ML (without the recently added
>module system) should really be considered polymorphic. ML-style
>polymorphism (like Ada generics) can always be replaced by macro
>expansion. (Unlike in Ada, this is not the usual implementation strategy.
>Also unlike Ada, polymorphic expressions have a type that can be written
>down.)

I'm having trouble understanding much of this debate. In particular, how
can ML-style polymorphism be replaced by macro expansion? Also, do you
consider the module system to be polymorphic? (I don't. In fact it's much
more like Ada than the core language. It provides type parameterisation
rather than polymorphism.)

My definition of a polymorphic function is one that can be applied to
objects of more than one type. Examples are the universal polymorphism of
ML and the bounded polymorphism of typed systems with inheritance.

A similar feature is overloading, a.k.a. ad-hoc polymorphism. This is a
polymorphism of a name rather than a function itself; a name may refer to
functions of more than one type, depending on the types of its arguments.
Examples include the static overloading of C++, Ada etc., and the dynamic
overloading of C++ virtual functions, Eiffel etc.

I don't see how "treating everything as a value" affects this definition.
If everything can be treated as a value then universal polymorphism will
apply to everything; if not, it will only apply to values (and not types).
But this seems to be a question of defining the "type of a type", rather
than a definition of polymorphism.

Could Hans or Alan explain their points further?

"See, this is why I got off drugs. Who can tell the difference anymore?"

Dave Berry, Laboratory for Foundations of Computer Science, Edinburgh.
db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk   <Atlantic Ocean>!mcvax!ukc!lfcs!db
alan@oz.nm.paradyne.com (Alan Lovejoy) (05/03/89)
>My definition of a polymorphic function is one that can be applied
>to objects of more than one type. Examples are the universal polymorphism
>of ML and the bounded polymorphism of typed systems with inheritance.

This is the result of type polymorphism when applied to functions.

An array which can accept values of any type as elements (e.g., the first
element is an integer, the second element is a BitBlt, etc.) is the result
of type polymorphism when applied to data structures.

An abstraction mechanism for constructing array types, which can be used to
construct array types of any index and/or element type, is the result of
type polymorphism when applied to metaclasses (a metaclass is an
abstraction mechanism which constructs classes (types)). For example,
Pascal has an array type constructor which can construct array types of any
element type and any *ordinal* index type: "ARRAY <ordinal type> OF
<element type>". Pascal's array type constructor can be thought of as a
function which returns array types given two types as arguments. Notice
that this is one of the only two ways in which types act as values in
Pascal (the other is when a type is an argument to the SIZEOF
pseudofunction).

>A similar feature is overloading, a.k.a. ad-hoc polymorphism. This is
>a polymorphism of a name rather than a function itself; a name may refer
>to functions of more than one type, depending on the types of its
>arguments. Examples include static overloading of C++, Ada etc., and
>dynamic overloading of C++ virtual functions, Eiffel etc.

Overloading does not make a function polymorphic. It may enable the
definition of polymorphic abstractions and/or abstraction mechanisms,
however. A function which, among other things, invokes the "Print"
function on its argument will be more polymorphic if "Print" has been
overloaded for many value types.

>I don't see how "treating everything as a value" affects this definition.
>If everything can be treated as a value then universal polymorphism will
>apply to everything, if not it will only apply to values (and not types).
>But this seems to be a question of defining the "type of a type", rather
>than a definition of polymorphism.

My definition of polymorphism had TWO essential components:
1) Value universality
2) Protocol universality
Overloading is a mechanism for accomplishing protocol universality.

>Could Hans or Alan explain their points further?

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne. They do not speak for me.
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.
boehm@flora.rice.edu (Hans Boehm) (05/03/89)
In article <1898@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>In article <3150@kalliope.rice.edu> boehm@flora.rice.edu (Hans Boehm) writes:
>> There is some legitimate debate whether ML (without the recently added
>> module system) should really be considered polymorphic. ML-style
>> polymorphism (like Ada generics) can always be replaced by macro expansion.
>> (Unlike in Ada, this is not the usual implementation strategy. Also unlike
>> Ada, polymorphic expressions have a type that can be written down.)
>
>I'm having trouble understanding much of this debate. In particular, how can
>ML style polymorphism be replaced by macro expansion? Also, do you consider
>the module system to be polymorphic? (I don't. In fact it's much more like
>Ada than the core language. It provides type parameterisation rather than
>polymorphism.)
>
>My definition of a polymorphic function is one that can be applied
>to objects of more than one type ...

If we (at least temporarily) agree on this definition, there are still at
least two ways to accomplish this. I'll use the identity function to
illustrate. We may either continue to write the identity function as

	lambda x . x

and consider it to have type

	forall T . T -> T

Equivalently, we may introduce the type T as an explicit parameter to the
identity function. We get

	lambda T . lambda x:T . x

or perhaps

	lambda (T, x:T) . x

In Reynolds' polymorphically typed lambda calculus the type of the first
form is written as something like

	Delta T . T -> T

In this case, the identity function must be invoked by applying it both to a
type and to its "real" argument.

In their full generality (and ignoring questions related to the amount of
user-specified type information) both views are equivalent. There is a
direct correspondence between "forall" and "Delta". Applications to types
correspond to replacement of a type variable by a specific type. (See
Leivant's paper in the 10th POPL proceedings for more details.)
ML (without modules) uses a restricted form of the first view, in which
"forall"s can appear only at the outer level of a type. Types such as

	(forall T . T -> T) -> int

are illegal. (Equivalently, there is no way to parametrize functions with
respect to polymorphic functions.) At least theoretically, this is a severe
restriction. If we take any applicative core ML program, and substitute the
right-hand sides of let declarations for their left-hand sides, we obtain an
equivalent nonpolymorphic program. For example:

	let I = lambda x . x
	in (I I) 3
    ==>
	((lambda x: int -> int . x) (lambda x: int . x)) 3

In particular, the addition of an ML-style polymorphic let to the simply
typed lambda calculus does not add to its expressive power. The addition of
fully general quantification or type parametrization does, greatly. How much
this matters in practice is open to debate. I would argue that it does
matter, but probably primarily when it comes to putting together large
systems.

The standard ML module system uses a type parametrization view. My
impression is that it also limits parametrization to the outer level, though
perhaps not for a good reason. (I could be completely wrong here, though.)

What core ML gains over the more general disciplines is that it appears to
be the most general system in which Hindley-Milner automatic type inference
is known to work completely, and thus the programmer has to specify
essentially no type information. (Note that the type inferencer in fact
determines where type variables are instantiated. Thus it could also infer a
program with explicit type parametrization and application, but again all
parametrization would have to be at the outer level.) It loses in
expressiveness. Also the restrictions seem to necessitate additional
complications elsewhere (a separate module system, the treatment of
variables). My (biased) opinion is that in the long run this isn't worth it.
In the short run, it is probably the best system we know how to implement
really well. But those of us working on more general systems are gaining ...

In any case, I don't think that the ML type system should be used to DEFINE
the notion of polymorphism.

Hans-J. Boehm
boehm@rice.edu
db@lfcs.ed.ac.uk (Dave Berry) (05/06/89)
In article <3215@kalliope.rice.edu> boehm@flora.rice.edu (Hans Boehm) writes:
>In article <3150@kalliope.rice.edu> boehm@flora.rice.edu (Hans Boehm) writes:
>> There is some legitimate debate whether ML (without the recently added
>> module system) should really be considered polymorphic.
>
>In any case, I don't think that the ML type system should be used to DEFINE
>the notion of polymorphism.

I agree with the second quote, since ML's polymorphism isn't as general as
other systems, but not the first.

"You never smile, you know it wouldn't look right.
'Cause your dentures glow in ultra-violet light..."

Dave Berry, Laboratory for Foundations    db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk
of Computer Science, Edinburgh Uni.       <Atlantic Ocean>!mcvax!ukc!lfcs!db
db@lfcs.ed.ac.uk (Dave Berry) (05/06/89)
In article <6057@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>>My definition of a polymorphic function is one that can be applied
>>to objects of more than one type.
>
>This is the result of type polymorphism when applied to functions.
>An array which can accept values of any type as elements is the result
>of type polymorphism when applied to data structures.

True. I usually think of data constructors as functions too. This comes of
working on the ML project.

>An abstraction mechanism which can be used to construct array types of any
>index and/or element type is the result of type polymorphism when applied to
>metaclasses (a metaclass is an abstraction mechanism which constructs types).

That's an interesting view. The approach taken round here is to use the idea
of dependent function types from type theory (as in Martin-Löf type theory,
the Calculus of Constructions, etc.)

It still seems to me that the question of treating everything as a value is
orthogonal to the question of what is polymorphism. (Obviously the
combination makes languages more expressive.)

>My definition of polymorphism had TWO essential components:
>
>1) Value universality
>2) Protocol universality
>
>Overloading is a mechanism for accomplishing protocol universality.

Is it the only such mechanism that you were thinking of, or were you also
thinking of the mechanism that I call polymorphism?

>Overloading does not make a function polymorphic. It may enable the
>definition of polymorphic abstractions and/or abstraction mechanisms, however.
>A function which, among other things, invokes the "Print" function on its
>argument will be more polymorphic if "Print" has been overloaded for many
>value types.

Providing that the type system can specify the type of the function (if
you're using a type system).

"You never smile, you know it wouldn't look right.
'Cause your dentures glow in ultra-violet light..."
Dave Berry, Laboratory for Foundations db%lfcs.ed.ac.uk@nsfnet-relay.ac.uk of Computer Science, Edinburgh Uni. <Atlantic Ocean>!mcvax!ukc!lfcs!db
alan@oz.nm.paradyne.com (Alan Lovejoy) (05/10/89)
In article <1943@etive.ed.ac.uk> db@lfcs.ed.ac.uk (Dave Berry) writes:
>In article <6057@pdn.paradyne.com> alan@oz.paradyne.com (Alan Lovejoy) writes:
>>An abstraction mechanism which can be used to construct array types of any
>>index and/or element type is the result of type polymorphism when applied to
>>metaclasses (a metaclass is an abstraction mechanism which constructs types).
>
>That's an interesting view. The approach taken round here is to use the
>idea of dependent function types from type theory (as in Martin-Löf
>type theory, the Calculus of Constructions, etc.)
>
>It still seems to me that the question of treating everything as a value
>is orthogonal to the question of what is polymorphism. (Obviously
>the combination makes languages more expressive.)

See below.

>>My definition of polymorphism had TWO essential components:
>>
>>1) Value universality
>>2) Protocol universality
>>
>>Overloading is a mechanism for accomplishing protocol universality.
>
>Is it the only such mechanism that you were thinking of, or were you
>also thinking of the mechanism that I call polymorphism?

I thought polymorphism was a property, not a mechanism. Could you explain
how your conceptualization of polymorphism qualifies as a mechanism? If I
say that language X permits functions to accept and produce values of any
type, would you say that that language is highly polymorphic? What if
language X only has two value types: integer and real?

For theoretical reasons, it is productive to view all actions that occur in
the compilation and/or execution of a program as the result of the
evaluation of functions. Hence, the definition of a new type should be
viewed as a service provided by a compile-time function ("compile-time" in
most languages, anyway). The range of definable data types in a language is
obviously a function of the polymorphism of the type-creation function.
Similarly, the production of values of a type should be viewed as a service
provided by an instantiation function.
Obviously, the polymorphism of this instantiating function determines what
can be a value in a language. This is how my definition of polymorphism can
be derived from the definition which you are used to.

Values are instances of a type (objects are instances of a class). The
requirement for protocol universality is related to the requirement for
value universality by this fact, because the existence of the ability, and
the protocol, to instantiate integers motivates the existence of the ability
to instantiate other types USING THE SAME INSTANTIATION PROTOCOL. By the
same reasoning, the ability to define the integer type motivates the ability
to define other types USING THE SAME TYPE DEFINITION PROTOCOL.

In other words, if types are the result of functions which evaluate to
types, then polymorphism demands that the type definition function be able
to produce multiple types. Given multiple types, polymorphism demands that
the value instantiation function accept many types as an input parameter.

From this perspective, functions, even notional ones, that are available to
the compiler but not to the programmer impair polymorphism. Denial of the
services of a function is the same thing as prohibiting the use of a type
(or its instances). FUNCTIONS ARE VALUES, TOO. Second-class functions are
second-class values.

Ask not only what types are in the domain of your functions, but also what
functions are defined on your types.

Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne. They do not speak for me.
_________________________Design Flaws Travel In Herds_________________________
Motto: If nanomachines will be able to reconstruct you, YOU AREN'T DEAD YET.
chin@sg1.chem.upenn.edu (Chin Wu) (04/10/90)
If we use a Base pointer to refer to a Derived object, will the Derived
destructor be used when it is deleted? I have used the following code to
explore:

--------------------------
// test derived constructor and destructor
#include <stream.h>

class Base
{
  public:
    Base() { cout << "This is base constructor\n"; }
    ~Base() { cout << "This is base destructor\n"; }
};

class Derived : public Base
{
  public:
    Derived() { cout << "This is derived constructor\n"; }
    ~Derived() { cout << "This is derived destructor\n"; }
};

main()
{
    Base* test;

    test = new Derived;
    delete test;
}
----------------------------

And the output is as follows:

This is base constructor
This is derived constructor
This is base destructor

So the derived destructor won't be called if we refer to the Derived object
through a Base pointer. But isn't it more natural to call both? If I
misunderstood C++, is there any way to call the Derived class destructor?

ps: I am using GNU G++; maybe AT&T behaves differently.
--
chin@sg1.chem.upenn.edu
mleblanc@athena.mit.edu (Marc LeBlanc) (04/10/90)
> If we use a Base pointer to refer to a Derived object, will the
>Derived destructor be used when it is deleted? I have used the
>following code to explore:
>--------------------------
>// test derived constructor and destructor
>#include <stream.h>
>
>class Base
>{
>  public:
>    Base() { cout << "This is base constructor\n"; }
>    ~Base() { cout << "This is base destructor\n"; }
>};
>
>class Derived : public Base
>{
>  public:
>    Derived() { cout << "This is derived constructor\n"; }
>    ~Derived() { cout << "This is derived destructor\n"; }
>};
>
>main()
>{
>    Base* test;
>    test = new Derived;
>    delete test;
>}
>----------------------------
>And the output is as follows:
>This is base constructor
>This is derived constructor
>This is base destructor
>
>So the derived destructor won't be called if we refer to the Derived
>object through a Base pointer. But isn't it more natural to call both? If I
>misunderstood C++, is there any way to call the Derived class destructor?

I think this would qualify as an "undocumented feature." If you declare the
base destructor as virtual, you get the desired effect. (I tried it.)
Shouldn't this be the default?

------------------------------------------------------------------------
Marc LeBlanc
mleblanc@athena.mit.edu
horstman@sjsumcs.sjsu.edu (Cay Horstmann) (04/11/90)
In article <CHIN.90Apr9174555@sg1.chem.upenn.edu> chin@sg1.chem.upenn.edu (Chin Wu) writes:
>
> If we use a Base pointer to refer to a Derived object, will the
>Derived destructor be used when it is deleted? I have used the
>following code to explore:
>
>class Base
>{
>  public:
>    Base() { cout << "This is base constructor\n"; }
>    ~Base() { cout << "This is base destructor\n"; }
>};
>
>class Derived : public Base
>{
>  public:
>    Derived() { cout << "This is derived constructor\n"; }
>    ~Derived() { cout << "This is derived destructor\n"; }
>};
>
>main()
>{
>    Base* test;
>
>    test = new Derived;
>    delete test;
>}
>
>... is there any way to call the Derived class destructor?
>

Yes. Declare the destructor to be virtual (in the Base).

Cay
pjc@cs.man.ac.uk (Peter Crowther (CAG ra)) (04/11/90)
In article <1990Apr10.112351.24301@athena.mit.edu> mleblanc@athena.mit.edu (Marc LeBlanc) writes:
>>So the derived destructor won't be called if we refer to the Derived
>>object through a Base pointer. But isn't it more natural to call both? If I
>>misunderstood C++, is there any way to call the Derived class destructor?
>
>I think this would qualify as an "undocumented feature." If you
>declare the base destructor as virtual, you get the desired effect. (I
>tried it.) Shouldn't this be the default?

IMHO, no. Reason: I dislike languages that force me to write inefficient
code (it's what comes of writing a lot of stuff on a Sun-2 (that's *2*) file
server supporting about 10 clients). The default in C++ is to save the space
in each object that would otherwise be allocated to the virtual function
table pointer.

It also maintains compatibility with C structures if the default is not to
tag another pointer onto the end of each structure. That's also important to
me; I have interfaces from C++ to SunView, RPC, XDR, Ingres and a bunch of C
libraries. Several of these would probably screw up if virtual destructors
were used. This is a weaker argument; the counterargument is 'Anyone trying
to interface C++ and C should have enough nous to change the default when
they need to'. I'm just lazy.

- Peter
--
Peter Crowther, Dept. of Electrical Engineering, University of Manchester,
Manchester M13 9PL, England.
Internet: pcrowther@r1.cs.man.ac.uk    Janet: pcrowther@uk.ac.man.cs.r1
ARPA: pcrowther%cs.man.ac.uk@nsfnet-relay.ac.uk    OtherNets: No idea at all!
jimad@microsoft.UUCP (Jim ADCOCK) (04/12/90)
In article <CHIN.90Apr9174555@sg1.chem.upenn.edu> chin@sg1.chem.upenn.edu (Chin Wu) writes:
> If we use a Base pointer to refer to a Derived object, will the
>Derived destructor be used when it is deleted?

Yes -- iff you tell the compiler that you want polymorphic behavior for the
destructor functions by declaring them "virtual."

In general, in C++, whenever you want the "correct" function called
[explicitly or implicitly] through a pointer or reference of a type of a
parent class, rather than a pointer or reference of the actual type of the
object, and when you have differing versions of the function in the base and
the derived classes -- then you need to declare the functions involved
"virtual."
jimad@microsoft.UUCP (Jim ADCOCK) (05/01/90)
>"leaf" classes are a myth - identify any class as a "leaf"
>and I will immediately prove you wrong by deriving from it.

Okay, I'll bite. I claim the following class *is* a leaf class. Prove me
wrong by deriving from it:

class LEAF {
private:
    LEAF() {}
    LEAF(const LEAF&) {}
    LEAF& operator=(const LEAF&) { return *this; }
public:
    void Print() { printf("I'm a leaf\n"); }
    static LEAF& New() { return *new LEAF; }
};

In any case, I believe this is the same old argument again. If one believes
that inheritance is the more important property, one insists that
encapsulation must be subjugated to inheritance. If one believes that
encapsulation is more important than inheritance, one points out that
inheritance can always break a superclass.

My claim would be that since in real life more than half the classes people
create *are* leaf classes, that is, classes which are *not* being derived
from, maybe it would behoove one to admit to such a distinction, and use it
to one's advantage. This need not be an end to code reuse. One can imagine
placing the bulk of one's effort into reusable "parent" classes, and only
placing a small amount of code into the non-reusable leaf classes. The
advantage would be that the leaf classes are now a strictly encapsulated
unit, and people can demonstrably argue about whether those leaf classes
work right or not -- such arguments can never be resolved for non-leaf
classes!
DEREK@apple.com (Derek White) (05/04/90)
As has been discussed in previous postings, virtual functions CAN be
optimized by smart tools towards the end of the build cycle. In the Mac's
MPW Object Pascal environment, all member functions start out "virtual"
(polymorphic), but the linker analyzes the object code and inheritance
structure, deduces which methods are "leaf" methods, and generates direct
calls to the method instead of going through a method dispatcher (which is
SLOW). Note that it does this for the final program, not the libraries. The
MPW linker has been doing this for years.

CFront was designed to work with generic compilers, linkers, etc., but
significant improvements could be made by designing a linker that could
replace virtual calls to leaf member functions with direct calls, and strip
vtables of references to these "leaf" virtual methods. Environments that
have shared object libraries would not be as optimizable, but it would still
help.

There are still occasions when it would be useful to designate non-virtual
functions. My guess is that 99.9% of private member functions should be
non-virtual, and that around 80%-90%? of protected/public member functions
should be virtual if an optimizing scheme existed. It starts to make a good
case for changing the defaults.
gvc87r@ecs.soton.ac.uk (Gary Chambers) (02/21/91)
Hello. Does anybody know for certain whether it is possible, or
impossible/impractical, for two classes having virtual member functions with
the same name, arguments etc. to use dynamic binding?

My principal concern is with class libraries: I would like to use objects of
a given class from one library, say, and also objects from another library,
using dynamic binding to call a certain procedure where the actual class of
the object (one from either of the libraries) is not known at compile time.

Thanks in advance,

Gary Chambers, Dept. Electronics & Computer Science,
University of Southampton, U.K.